IGF 2023 Reports

IGF 2023 Open Forum #57 Procuring modern security standards by governments & industry

Updated: Thu, 07/12/2023 - 18:31
Cybersecurity, Cybercrime & Online Safety
Key Takeaways:

1. Modern internet standards (such as IPv6, DNSSEC, HTTPS, DMARC, DANE and RPKI) are essential for an open, secure and resilient Internet that serves as a driver of social progress and economic growth. Such standards have been developed, but their use needs to increase significantly to make them fully effective. Procurement policies have proven to be an effective means of ensuring that these standards get traction and are used more widely.


2. Not using modern standards poses a risk to the individual internet user. However, users are often not aware of this (because standards work "under the hood"), and economic network effects prevent users from fully benefiting immediately ("first-mover disadvantage"). Research by IS3C has shown that public-private partnerships can play a crucial role in creating the transparency and awareness that are needed to reach critical mass.

Calls to Action

1. To governments and TLD registry operators: Monitor the usage of modern internet security standards (such as IPv6, DNSSEC and RPKI) in the public sector and in society. For this, they can make use of open source tools such as https://Internet.nl and even extend them (e.g. with tests for Universal Acceptance and for accessibility). Such tooling provides transparency, helps end-users articulate their demand, and creates an incentive for vendors to comply.


2. To governments and industries: Publish procurement policies regarding modern internet security standards. These can be reused by others when creating procurement policies. Furthermore, vendors could use them as requirements for their software and systems. The list of the most important internet security standards created by IS3C (https://is3coalition.org/) can be used as a reference (consultation until 5 November 2023).

Session Report

Moderator Olaf Kolkman introduced this Open Forum by elaborating on the role of modern security standards in securing the internet. He emphasized that we need to secure the internet for the common good. One of the challenges that comes with securing the internet is the slow adoption of security standards. Therefore, this Open Forum highlights tools that enhance the adoption of modern security standards.

The Role of Open Standards, particularly in Procurement: Experiences in the Netherlands

Modern internet standards (such as IPv6, DNSSEC, HTTPS, DMARC, DANE and RPKI) are essential for an open, secure and resilient Internet that serves as a driver of social progress and economic growth. Gerben Klein Baltink and Annemieke Toersen explained the role of standards in procurement and their experiences in the Netherlands. The role of open standards in promoting a safer, more secure, and well-connected internet has become increasingly recognized, with initiatives like the internet.nl test tool contributing significantly to this progress. The tool is primarily aimed at organizations, attracting both technical personnel and board members, and allows them to assess whether their mail, website, and local connections comply with established standards.
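To illustrate the kind of check such a tool performs under the hood, the sketch below parses a published DMARC TXT record (the tag-value syntax defined in RFC 7489) and reports the enforced policy. This is a minimal, hypothetical Python example, not code from the internet.nl project; a real compliance test also covers DNSSEC, TLS configuration, SPF alignment and more.

```python
def parse_dmarc(record: str):
    """Parse a DMARC TXT record (RFC 7489 "tag=value;" syntax) into a dict.

    Returns None if the string is not a valid DMARC record.
    """
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if not part:
            continue
        if "=" not in part:
            return None  # malformed tag
        key, _, value = part.partition("=")
        tags[key.strip()] = value.strip()
    # A DMARC record must declare v=DMARC1 and a policy tag 'p'.
    if tags.get("v") != "DMARC1" or "p" not in tags:
        return None
    return tags


def published_policy(record: str) -> str:
    """Report the enforced policy: 'reject', 'quarantine', 'none', or absent."""
    tags = parse_dmarc(record)
    return tags["p"] if tags else "no valid DMARC record"
```

In practice the record would first be fetched from DNS at `_dmarc.<domain>`; the parsing and evaluation step is the same.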

In the procurement and supply chain management domain, the Forum Standaardisatie think tank has been actively promoting the use of open standards, advocating for more interoperability. With 25 members from government, businesses and science, the forum advises governments on the adoption of open standards, emphasizing their importance in promoting information exchange, ensuring interoperability, security, accessibility and vendor neutrality.

The Dutch government has pursued a three-fold strategy to promote open standards. Firstly, through the implementation of a "comply or explain" list of 40 open standards, carefully researched and consulted on with experts. This has led to increased adoption, particularly in areas such as internet and security, document management, and administrative processes like e-invoicing. Government entities are mandated to use these standards, with reporting required if they are not followed.

Secondly, the government has fostered national and international cooperation, facilitating workshops on modern email security standards within the EU, and engaging with prominent vendors and hosting companies such as Cisco, Microsoft, and Google. They have also facilitated the reuse of internet.nl code in various projects, such as aucheck.com and top.nic.br.

Finally, the Dutch government actively monitors the adoption of open standards, evaluating tenders and procurement documents, and ensuring that the standards are included. Reports are submitted to the government, and efforts are made to support and guide vendors who may be lagging behind in the adoption of these standards.

Lessons learned from these efforts emphasize the importance of consistently checking for open standards in procurement processes and providing guidance and support to encourage their usage. The comprehensive approach taken by the Dutch government, along with collaborations with various stakeholders, has contributed significantly to the wider adoption and implementation of open standards, fostering a more secure and interconnected digital environment.

Procurement and Supply Chain Management and the Business Case

Wout de Natris and Mallory Knodel elaborated on the role of the Internet Standards, Security, and Safety dynamic coalition in enhancing internet security and safety through various initiatives. The coalition has established three working groups: Security by Design on the Internet of Things; Education and Skills; and Procurement and Supply Chain Management and the Business Case, aiming to contribute to a more secure online environment.

Their ongoing projects involve the deployment of DNSSEC and RPKI, exploring emerging technologies, and addressing data governance and privacy issues. They strive to persuade decision-makers to invest in secure internet standards by developing a persuasive narrative incorporating political, economic, social, and security arguments. The Procurement and Supply Chain Management and the Business Case working group has released a comprehensive report comparing global procurement policies, shedding light on existing practices and advocating for more transparent and secure procurement processes.
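To make the RPKI side concrete, the sketch below implements the prefix/origin comparison at the heart of route origin validation (in the spirit of RFC 6811). The ROA entries here are invented for illustration; in a real deployment, ROAs are cryptographically signed objects fetched from RPKI repositories by a relying-party validator.

```python
from ipaddress import ip_network

# Hypothetical validated ROA cache: (prefix, max length, authorised origin AS).
ROAS = [
    (ip_network("192.0.2.0/24"), 24, 64500),
    (ip_network("198.51.100.0/22"), 24, 64501),
]


def validate_origin(prefix: str, origin_asn: int) -> str:
    """Classify a BGP announcement as 'valid', 'invalid' or 'not-found'."""
    announced = ip_network(prefix)
    covered = False
    for roa_prefix, max_length, asn in ROAS:
        # A ROA covers the announcement if the announced prefix is equal
        # to or more specific than the ROA prefix.
        if announced.subnet_of(roa_prefix):
            covered = True
            # Valid only if within the permitted length and from the right AS.
            if announced.prefixlen <= max_length and asn == origin_asn:
                return "valid"
    return "invalid" if covered else "not-found"
```

A route classified "invalid" (for example, a more-specific announcement exceeding the ROA's maxLength, or the right prefix from the wrong AS) is typically dropped by routers enforcing origin validation, while "not-found" routes are usually accepted.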

The coalition highlights the need for greater recognition and integration of open internet standards into government policies, emphasizing the importance of universal adoption of standards for data protection, network and infrastructure security, website and application security, and communication security. They aim to provide decision-makers and procurement officers with a practical tool that includes a list of urgent internet standards to guide their decision-making and procurement processes.

By focusing on streamlining and expediting the validation process for open internet standards in public procurement, the coalition seeks to enhance procurement policies, resulting in more secure and reliable digital infrastructure. Overall, their collaborative efforts and initiatives aim to create a safer online landscape for individuals, organizations, and governments by promoting the secure design and deployment of internet standards and advocating for the adoption of open internet standards in government policies.

The report from is3coalition.org highlights a concerning trend where governments fail to recognize the critical components that enable the internet to function effectively. This issue has been a recurring question in various research endeavors, prompting the Working Group (WG) to prioritize and compile existing security-related internet standards and best practices in the field of ICT.

Best practice awards go to: the GDPR in the European Union, which provides a common understanding and harmonization with regard to the security of information systems; the Dutch Ministry of the Interior and Kingdom Relations, which mandates standards deployment: the ‘Pas toe of leg uit’-Lijst (comply-or-explain list) of the Dutch Standardisation Forum is a document containing 43 open standards that all governments in the Netherlands have to demand when procuring ICT; and Internet.nl, the tool used to track standards adoption by an organization’s website based on three indicators: website, email and connection. The software has been adopted in Australia, Brazil, Denmark and Singapore.

IS3C provides decision-makers and procurement officers involved in ICT procurement with a list containing the most urgent internet standards and related best practices. This helps them take internet security and safety requirements into account and procure secure-by-design ICT products, services and devices, making their organizations as a whole more secure and safer. By raising awareness and emphasizing the significance of internet security and safety requirements, the report seeks to prompt officials to consider and integrate these crucial standards into their operational frameworks.

To gather insights and perspectives on this critical issue, the coalition is conducting a consultation on the report until November 5th at 10:00 UTC. This consultation aims to engage stakeholders and experts to discuss and address the challenges associated with the recognition and implementation of internet security standards by governments.

Report: https://is3coalition.org/docs/is3c-working-group-5-report-and-list/

Perspectives from India

There are many examples of good efforts and effective tools enhancing internet security. One of these examples comes from India. Mr. Satish Babu highlighted that the Trusted Internet India Initiative was initially established at the India School of Internet Governance (inSIG) in 2016 and has since 2018 been collaborating with the Global Forum for Cyber Expertise.

InSIG organized GFCE’s Internet Infrastructure Initiative (Triple-I) Workshop in 2018, 2019, 2022 and 2023 as Day 0 events of inSIG. The Triple-I workshop seeks to “...enhance justified trust in the Internet” by building awareness and capacity on Internet-related international standards, norms and best practices. In its 2023 edition, the Triple-I workshop announced a new initiative that attempts to measure periodically the compliance of Indian websites, DNS and email services to modern security standards (to begin in 2024).

During the T3I workshop, it was emphasized that digital technology plays a crucial role in fostering India’s growth. The digital public infrastructure, which serves over a billion citizens, facilitates applications related to financial health, logistics, and more. However, the workshop shed light on the existing weak levels of compliance within these systems. In response to this observation, volunteers associated with T3I conducted extensive research to identify areas of improvement.

Building on their research findings, the initiative now plans to conduct comprehensive testing and disseminate the results to all stakeholders. The aim of this effort is to enhance compliance levels across Indian digital platforms, ensuring that they meet modern security standards and contribute to a safer and more secure digital environment. 

Perspectives from Brazil

Mr. Flavio Kenji Yanai and Gilberto Zorello shared their experiences from a Brazilian perspective. The Brazilian Network Information Center (NIC.br) is a non-profit civil entity that since 2005 has been assigned the administrative and operational functions related to the .br domain. NIC.br is actively investing in various actions and programs to improve internet services across different sectors. Their initiatives are geared towards disseminating knowledge and best practices, contributing to a safer and more secure internet environment in the country.

A key project they are currently undertaking is the TOP Teste os Padrões (Test the Standards) tool, which was initiated in December 2021 and utilizes Internet.nl provided by the Dutch government. As part of the Safer Internet program, their objectives include providing support to the internet technical community. This involves collaborating with various groups to develop technical teaching materials and promote good practices aimed at raising awareness within the technical community. Their efforts have yielded positive results, as statistics indicate a reduction in misconfigured IP addresses.

Furthermore, they have implemented the Mutually Agreed Norms for Routing Security (MANRS) in Brazil, leading to a notable increase in the number of participants. The statistics reflect continuous improvements in various aspects of internet security within the country. With significant incumbents responsible for approximately 50% of the internet traffic in Brazil, the implementation of version 1.7 of internet.nl, currently in the validation phase, has been instrumental. The tool is being widely disseminated in conjunction with the Program for a Safer Internet, with government entities also starting to utilize it to test their websites and email services. The TOP tool has proven to be of immense value in fortifying the internet infrastructure in Brazil.

IGF 2023 WS #109 The Internet in 20 Years Time: Avoiding Fragmentation

Updated: Fri, 01/12/2023 - 13:02
Avoiding Internet Fragmentation
Key Takeaways:

There is a level of Internet fragmentation today, which manifests at the technical, regulatory and political levels. There is a chance, however, to act upon present and future fragmentation and design the future Internet we want. We should consider incentives (where economics has played a central role), think about how to find convergence, design a future Internet for people, and be ready for this debate to be impacted by geopolitics and the climate crisis.


To get to a best-case future scenario, we should take an incremental, iterative approach to devising solutions (including regulation); our actions should have a compass and be principles-based (openness and permissionless innovation emerged as central guiding principles); and we should strive for inclusivity in governance and standards, take guidance from human rights frameworks, and engage actively in difficult areas where there is tension or “chaos.”

Session Report

This workshop proposed to discuss Internet fragmentation through a forward-looking exercise. The session opened with the moderator inviting the panel and audience to think of the Internet in 2043, what good would look like, and what it would take to fulfil the hoped-for future we want.

The panellists started off by sharing their thoughts on what imagining the future entails, based on past experience.

  • Olaf Kolkman from the Internet Society highlighted that it is hard to predict the future and which technologies will triumph, exemplifying with his erroneous prediction that webpages would not go beyond academic libraries.
  • Sheetal Kumar from Global Partners Digital spoke about the ubiquity of smartphones and connectivity as a crucial development. Looking to the future, she encouraged the audience to think about what we want the Internet to feel like; she believes the Internet will continue to grow in embeddedness and finds that how the Internet evolves will depend on what Internet we choose to create.
  • French Ambassador for Digital Affairs, Henri Verdier, who created his first web-based company in the 90s, shared a story about how he erroneously predicted that Wikipedia would fail to take off.
  • Professor Izumi Aizu from Tama University mentioned that we are oftentimes overly optimistic about the future, which in reality may be composed of different shades and colours. The future is bound to surprise us with unpredictable events like Fukushima or the unfolding conflict in Gaza.
  • Lorraine Porciuncula from the Datasphere Initiative spoke of being a digital native and the optimism felt during the Arab Spring. She recalled the sense of opportunity and “capability” brought by technology. Time showed that there are good and bad aspects to technology, yet she encouraged the audience to reconnect with a sense of optimism.

The moderator introduced the discussion paper submitted as part of the session (https://dnsrf.org/blog/the-internet-in-20-years-time-what-we-should-hav…), which lays out three potential future scenarios:

  • Scenario 1: Continued Status Quo. In the first scenario, we muddle along, continuing the current course of action, and end up with an internet that continues on its present trajectory with some signs of fragmentation;
  • Scenario 2: Fully Fragmented Internet. The second scenario is one of complete fragmentation, divided at the technical, ideological or regulatory layers, or all three;
  • Scenario 3: Strengthened, Non-fragmented Internet. The third scenario is one of a bright future where we get our act together.

The moderator invited the panel and audience to comment on what they see as the most likely future and why, and at what layer they see the most risk.

Olaf said that in reading the scenarios, he was struck by how the future is already here. Many of the things described in the scenarios, such as drivers for the fragmentation of the technical layers of the Internet, are already happening, and if they take off, they will splinter the internet. He explained that the value he sees in the Internet lies in its openness, the scientific method of sharing knowledge, and the ability to probe, query and scrutinise one another. He commented in particular on scenario 1, where we see a mix of closed networks coexisting with the Internet. This is about being proprietary, about the Internet being closed, about the Internet developing services that people pay for, where people connect to servers to access specific services and interconnectivity is less important. This is an entirely different notion from the Internet that exists to connect us to the rest of the world, where we get to choose services. To Olaf, openness is the best case scenario; it is where the richness of the Internet really lies.

The moderator took a round of early comments from the audience. 

  • Barry Leiba said that what has driven the evolution of the Internet is the innovation in applications and services. He therefore thinks that a great idea for an application (perhaps yet to come) is what will drive the Internet of tomorrow, including another set of standards and technologies. He highlighted the role of standards in shaping the way we will experience technology. 
  • Andrew Campling stated that we are also at an inflection point. Up to now, the Internet was seen as a force for good. He finds we are now at the point where the balance is shifting towards the Internet becoming a source of harm, with the rise of disinformation and CSAM. Adding to the point on standards, he urged standards development organisations (SDOs) to become more diverse.
  • Michael Nelson from the Carnegie Endowment for International Peace came in next. He taught a class about internet future(s), where he highlighted to his students that the best way to understand what is coming in terms of technology is not to understand what the technology can do, or what governments want it not to do, but rather to look at what the users want. So we should ask ourselves: what will drive companies and governments to do better? He concluded by saying “I am a technology positivist but political negativist.”

The moderator returned to the panellists. Izumi described the first scenario of mixed networks co-existing with the Internet as a scenario of chaos. He consulted a number of AI tools on the subject of the panel and shared the findings with the audience. ChatGPT said that, while there is fragmentation due to economic and political reasons, the ethos of the Internet as a tool for global communication will likely persist. Bard was even more optimistic and said the Internet might become even more unified. He challenged the audience to think of a better internet not for the sake of the Internet itself, but for the sake of a better society, which is a different perspective on how to understand the Internet.

Lorraine, on the other hand, said that in her view, we will not have an issue of fragmentation around the Internet’s technical layers, but we will have a very concrete challenge on the regulatory side. This issue is reflective not only of the fragmentation of the Internet, but of the fragmentation of society. She urged the audience to consider “how are we (as societies) going to get along? What are the incentives?” Regulators will regulate what they are scared of: they want to control national security, democratic processes, content, and so on. So when talking of regulatory-driven fragmentation, the question becomes “How will we work to find convergence?”

Ambassador Verdier said that he is uncertain which scenario will materialise, but that he knows what we should fight for. We know what the Internet brought us in terms of possibilities. Now there is great centralisation, if you look for example at submarine cables. He finds that big tech does not care for a decentralised internet, and that “we need to fight for that interconnected, free, decentralised internet.” He also reflected on John Perry Barlow’s notion of Cyberspace (https://www.eff.org/cyberspace-independence), where the Internet felt like it was somewhere far off in “cyberspace”. Now the digital is embedded in all aspects of life: education, health, and even war and peace. He finds that fragmentation of the technical layer would be an extremely bad scenario, as interdependence now holds it all together. If the internet were to fully fragment, the temptation to disconnect each other’s internet would be very high, and war would be waged on infrastructure itself. So far we have cyberwarfare, but no attempts to disconnect internets. Beyond the technical layer, there is a political and legal layer. From a legal point of view, he sees it would be better to have regulatory convergence, but if you believe in democracy, you need to respect regulatory proposals that are reflective of local prerogatives, as is the case in France.

Sheetal came in next and said she finds that we have the capacity to build and design our own future, even though there are power asymmetries to be aware of. She picked up on the notion of how the Internet of the future should feel: it should feel liberating, especially to those who do not occupy those positions of power. She hopes for a future Internet that does not reflect the inequalities of our society. This will require that those who build the technologies and develop the standards, open up spaces to those communities affected by technology developments. In terms of what we should do, she highlighted “we know exactly what we need to do, we just don’t do it at the moment.” There are many useful tools and guidance on how to build a better, human-rights-respecting Internet. We should utilise and leverage those in shaping the Internet of tomorrow.

The audience came in with a new round of comments:

  • Web 3 and money. Georgia Osborn picked up on money being a huge incentive on the Internet, and currently money being a massive driver for the development of blockchain technologies, Web 3.0, alternative naming systems, and cryptocurrencies. She asked the panel to reflect on whether those forces are bound to further fragment the Internet, or not.
  • Interoperable laws. Steve del Bianco from NetChoice highlighted the impact of fragmentation through regulation, and stated that regulation is the main challenge we will confront, one that is already unfolding. There appear to be no costs or consequences for governments, particularly authoritarian governments that want to control what their citizens see. He highlighted how IGF 2023 was largely about AI, but not about collaboration. “We have been hearing competing views about how it should be regulated and where it needs to go. That is not going to work transnationally.” He encouraged the audience to think of ways of documenting the cost of fragmentation and raising the “pain level” for bad regulatory proposals.
  • Bertrand Le Chapelle from the Internet and Jurisdiction Network also spoke about legal interoperability. He said that fragmentation is not driven by technical objectives but by politics. Legal fragmentation is a reflection of the international political system, which today is heavily influenced by notions of national sovereignty. Legal fragmentation is what prevents us from dealing with online abuse in many cases. The framework for accessing electronic evidence is non-existent or insufficient. He agreed with Ambassador Verdier that countries have a “democratic freedom/capacity” to do what they deem right for their citizens, but if we want to preserve interoperability we need to reduce the friction at the legal level. He also thinks we need heterogeneous governance frameworks that allow the coexistence of government regulation, companies’ self-regulation, and other frameworks that operate independently yet are able to speak to and with one another.
  • Involvement of the global south and regions with ideological disagreement. Nikki Colosso from Roadblocks came in next. She pointed out how a lot of the conversation at IGF 2022 dealt with incorporating the global south and inclusivity. She asked the panel what specific steps companies and civil society can take to involve users from countries that are not represented in these conversations, or from countries where there are differences from a geopolitical perspective.
  • Digital Colonialism. Jerel James picked up on the issue of profit as an incentive. Money is how power gets flexed on certain communities. He asked about digital colonialism and how it may be sanctioned. As we see antitrust regulation for monopolies exists in our traditional finance system, he asked whether there are possibilities to sanction resource extraction by big tech as a means to stop digital colonialism.
  • Bad behaviour in the online realm. Jennifer Bramlet from the UN Security Council spoke next. She focuses on how bad actors exploit ICTs for terrorism, including their use to recruit and radicalise individuals. From a regulatory perspective, they look at what is considered unlawful and harmful language across jurisdictions. Looking to the future, they are concerned about crime and terrorist activity in the metaverse, and how it may be tackled going forward when regulation hasn’t yet caught up with the online criminal challenges we see today. Her question to the panel was how to deal with bad behaviour in the online realm.
  • Call not to lose sight of the social value of the Internet. Vittorio Bertola came next. He believes Europe is producing regulation precisely to preserve the global nature of the Internet, not to break it. Also, if the future of the internet is decided by what people want from it, people want entertainment and social media attention. If we focus only on that, we lose sight of the social purpose of the technology. Doing things just because we can, or for money, is not enough.

Ambassador Verdier responded first by saying he shares Bertrand’s aspiration of interoperable legislation. But while we can work on making progress in that direction, we are not there yet. France is fighting for regulation of big tech, which they see as a private place built on the internet. In his view, “you can do that and still protect the global internet.”

Sheetal elaborated on what we can do. On legal fragmentation, she expressed that there is a need for harmonisation. She finds we have human rights standards to guide us, as well as the rule of law and our institutions. We can use those human rights standards and guidance to shape the online space. She seconded the need to protect the openness of the Internet and the ability to build your own apps and technology, and supported the need to protect the critical properties of the internet, which comes hand in hand with the need to make standards bodies more inclusive. She encouraged all participants to take the conversation home, to ensure that we vocalise the values we want reflected in the Internet of tomorrow, and that those get executed. She concluded with an invitation: “Let's not be nostalgic, let’s look forward.” That requires giving users control, and not letting governments or companies determine what the future is about.

Izumi reacted to Vittorio and Bertrand. He agreed that the future of the Internet depends on the will of people and users, and that it also depends on legal frameworks. He wanted to add additional dimensions to consider, two factors that are unknown: climate and politics. We may still gather, hosted by the UN, in 20 years’ time, independently of how politics plays out and who wins what war. Climate change, however, is an existential threat; we may think of it as a factor external to the internet, but it may well shape the future of the Internet, and may even lead to war. In the 1940s, we killed each other a lot. We then had the Cold War, and then came the Internet. Perhaps the timing was right, as the East and West were open to coming closer together. That political will is what allowed the Internet to get picked up. China wanted to have technology and science; that is why China accepted the Internet, to have growth, innovation and technology. Now China and India have reached the point where they do not need the West anymore. He concluded by inviting us to think not just of the Internet of the future: the question has to be how the present and future will offer something better for society.

Lorraine picked up on notions of what the Internet can do for people. She highlighted that narratives matter, so it is not about the Internet, but about the digital society. Now, when we reflect on “what is our vision for the Internet? what do we want the Internet to feel like?”, she finds that we do not have a clear, shared vision. If the issue were walled gardens, we could use antitrust and competition tools so that users could move to other platforms. But the truth is that with the Internet, one government can’t fix it all, so it’s all about governance. We need to focus on asking ourselves “how do we cooperate? how do we govern? What are our economic and social objectives?”

Olaf concluded by explaining that not having infrastructure at all is the ultimate fragmentation. Empowered communities are the way forward, like IXPs and community networks; that is truly bottom-up. He also added thoughts on standardisation. When you talk about economics and standardisation, standardisation is to a large extent industry driven and industry politics; we need to put that on the table and understand it. With economics, consolidation happens: even if you have open technologies, companies will try to extract money from using those open technologies. And you will have an accumulation of power to the point where governments might say this is too much and want to regulate it. But we need to remember you don’t need standards for every innovation. The founder of blockchain engaged in permissionless, open innovation (he did not innovate via standards-making bodies). Innovation happens today, not just in standards organisations. If you ask him, from a technical perspective, where to go in the future: open architecture, so that people build on the work of others; open code, so that it can be reused; and open standards.

There was a last round of comments from the audience:

  • Yug Desai, ISOC Youth Ambassador, thinks that 20 years from now we will have fragmentation, not by design, but by default due to capacity gaps. He finds that standards are unable to keep up with the pace of innovation and are not sufficiently inclusive of users.
  • Mark Dattysgeld highlighted the importance of open source and the role of research driving AI. He said we should ask ourselves whether that is the new paradigm that takes things forward. This point was reinforced by Lucien Taylor on the example of TCP/IP.

The session wrapped with final recommendations from the panel about what to do next:

Raul Echeberria from ALAI finds we already have a level of internet fragmentation, and we need to live with that. The incentives of policy makers are diverse, and not always driven by the search for the best outcomes for all. Our mission has to be protecting the Internet. In terms of what to do, his proposal is to go for "gradual objectives and commitments, instead of going for the whole packet." In sum, he suggests an incremental approach. He also said that in speaking to policy-makers, we need to make our messages sharper and clearer, and better outline what governments should not do. Lastly, he shared that he had recently participated in a discussion with parliamentarians, all of whom were over 50 years old. They spoke about fears, but it is important that we do not develop policies based on fear; let's not let fear stop evolution.

Lorraine reiterated the points we heard so far – being clear on what the objectives are, being incremental– and added being iterative. There is no ultimate regulation that will get it right, so we need to test stuff and iterate. The system is hard to predict and it moves fast. We need processes and institutions that are more agile. Like in software development, we need to identify the bug, and have multi-stakeholder conversations to address them. True multi-stakeholderism works when it seeks to be inclusive in an intentional way, particularly of communities that are underrepresented.

Ambassador Verdier added he thinks we can agree on a compass. In his view, we should stand for 3 aspects of the Internet’s golden age: unprecedented openness and access to information, which to date has not been fully accomplished as we still have a digital divide; unprecedented empowerment of communities and people; and permissionless innovation. He reiterated that fragmentation can come from the private sector, not just rogue states.

Olaf emphasised the point of the compass, saying our work needs to be principles-based. We need to make a differentiation between evolution OF the internet and evolution ON the Internet. We can get to those shared principles if we talk of the evolution OF the Internet. When we talk about empowerment, individualism, autonomy ON the Internet it gets more complicated to arrive at shared principles.

Sheetal added that we need to assess how governments do regulation, and how companies operate, from a human rights perspective. Are they human rights respecting? Is there accountability and transparency? Are our governance and standards bodies inclusive? She summarised her points as protecting critical properties as they evolve, adopting a principles-based approach, building on the human rights framework, and creating more inclusive spaces.

Lastly, Izumi highlighted that there were no Chinese or Indian representatives in the high-level session on AI, which to him is telling of the level of fragmentation that already exists. It wasn't like that 18 years ago, and fears have taken hold. He encouraged the audience to go out into the world of chaos, to engage where there is tension, and to think outside the box.

IGF 2023 Town Hall #105 Resilient and Responsible AI

Updated: Sat, 25/11/2023 - 07:17
Sustainability & Environment
Key Takeaways:

Considering situations, including crises, where dynamic interactions between multiple AI systems, physical systems, and humans across a wide range of domains may lead to unpredictable outcomes, we need to establish a discussion of resilient and responsible AI. We propose that a large complex system should be capable of maintaining or improving the value enjoyed by humans through the system in response to various changes inside and outside the system.

,

In order to achieve system resilience in a human-centric way by letting humans make and embody their own value judgements, an interorganizational and agile governance mechanism is needed.

Calls to Action

The points presented above require urgent discussion and action under an international and comprehensive framework.

,

Broad outreach, including to the general public, is also needed.

Session Report

At the beginning of the session, Dr. Arisa Ema (The University of Tokyo), one of the organizers, explained the purpose of the session. The aim of this session is to expand the concept of "Responsible AI,” which is an important topic of AI governance, to "Resilient and Responsible AI" by considering the possibility of situations including crises where dynamic interactions between multiple AI systems, physical systems, and humans across a wide range of domains may lead to unpredictable outcomes.

First, Carly and Yui, pilots (operators) of OriHime, an avatar robot, talked about their experiences from the user's viewpoint of the technology. Both use wheelchairs and feel the value of participating in society through the avatar robots. On the other hand, they have encountered situations where they could not handle irregular events because of overreliance on technology. Carly shared an experience in which, when a lightning strike caused a power failure while he was working at home, he was unable to turn on the switchboard by himself and lost communication with the outside world. Yui talked about the anxiety and unnecessary apologies that people who need assistance face in a social system that is becoming increasingly automated. In a technology-driven society, where manuals exist but are not always put into practice, she realised that this assumption could break down not only in ordinary times but also in times of disaster, and that she would then have to rely on people. The common conclusion of both stories is suggestive: the balance between technology and human support is important, and the possibility that technology does not work must be taken into account. Furthermore, it made us realise that the nature of a crisis can be diverse for a diverse society.

Next, Dr. Hiroaki Kitano (Sony), a researcher and executive of a technology company who is currently working on an AI project for scientific discovery, pointed out that such AI brings benefits to human beings but also carries risks of misuse. He also highlighted the possibility of future large-scale earthquakes in Japan and the importance of avoiding excessive reliance on AI. As society's dependence on AI increases, there is a risk that AI will not be available in accidents such as large-scale power outages, since it requires communication networks, stable power, and PC/mobile devices.

The organizers and three panelists, Dr. Inma Martinez (Global Partnership on AI), Ms. Rebecca Finlay (Partnership on AI), and Dr. David Leslie (The Alan Turing Institute), led the discussion based on the issues raised by the OriHime pilots and Dr. Kitano. Dr. Martinez mentioned the necessity of defining resilience, and emphasized that the power of technology should be rooted in the values we have learned from our families and national cultures; by doing so, empowerment can create resilience. Ms. Finlay pointed out that while assessments of AI systems before launch are discussed, attention is hardly paid to how they affect different communities after they are released. Resilience and control methods are required throughout the life cycle of AI, i.e., during the research phase and before and after launch. Focusing on machine learning, which has been the mainstream of AI in recent years, Dr. Leslie pointed out that data-driven systems may become vulnerable in a dynamic environment: as society and culture gradually change, machine-learning systems driven by past data have limitations. He emphasized the importance of considering resilience because excessive reliance on data-driven systems may lead to stagnation in human creativity. In response to these discussions, Dr. Ema pointed out that we need to consider how technological and social perspectives on current topics such as generative AI will change. The audience raised the following three points:

  • The need for society to provide people with options for solutions.
  • The need for a more comprehensive impact assessment (technology, ethics, human rights, etc.) 
  • The risk of forgetting skills due to dependence on technology.

Then, a participant asked about AI as critical infrastructure. In response, Dr. Martinez said first that AI is an infrastructure-based service, and that it creates an unknown area for society. She mentioned the resilience of communication infrastructure in which she had been involved, and introduced an example in which a specific band continues to operate even if the whole network goes down in a disaster. She also pointed out the need to consider self-repair mechanisms for AI in the event of an infrastructure outage, and how to build not only system but also human resilience. Ms. Finlay, responding to Dr. Martinez, touched on the possibility that AI can be introduced in various ways with various implications, and pointed out that systems need multiple layers of resilience; the way to understand how AI interacts within a system is to map the system and understand its effects. Dr. Leslie pointed out that AI is rapidly becoming an infrastructure and general-purpose technology, and that it functions as a substitute for human thinking and acting. AI is becoming a kind of utility, but if it becomes infrastructure, the question is who should control it. Dr. Ema said that it is difficult to hold individual companies accountable when AI becomes infrastructural and goes beyond the scope of a single company, and that governmental and global discussions will be required.

As a summary of the discussion, the panelists highlighted the need for AI to be safe and to have a solid foundation in society. They also emphasized the importance of defining and monitoring resilience to support society. In addition, they agreed on the necessity of international research institutions to discuss AI from scientific and technological perspectives in the face of its rapid commercialization. In response to these comments, Dr. Ema concluded the discussion with the hope that all of us will work together to realize resilient and responsible AI. The session received a variety of comments: a participant from the public sector appreciated the uniqueness of the theme and the importance of the discussion, while another participant raised practical questions such as how to handle large, complex systems composed of multiple AI systems. It is important to continue the discussion on this topic.

 

IGF 2023 DC-IoT Progressing Global Good Practice for the Internet of Things

Updated: Tue, 07/11/2023 - 20:16
AI & Emerging Technologies
Key Takeaways:
When using IoT devices and services, strong identification becomes key to protect them from tampering. This identification may be between devices, for instance those that together provide a service or together form a so-called "cyber-physical system" such as a car, a house, or an airplane. When this identification is between people and devices, there need to be sufficient measures in place to ensure privacy by default.

,

With the ongoing growth of IoT deployment throughout our world, scaling issues are important to consider. Going forward, two design imperatives need to be taken on board: (1) security by design, meaning every device needs to be protectable (and updatable when needed); and (2) every device needs to be as carbon neutral as possible (as there will be many, including those dependent on power).

Calls to Action

Require appropriate security measures for IoT devices that can be handled by those that use them, and ensure appropriate labeling (dynamic for those devices that are software-updatable) to make it possible for users to assess the risks and take the necessary measures.

,

Set global standards for this, as it concerns devices that are developed all over the world, and are deployed all over the world. National/regional initiatives will need to take global good practice into account.

Session Report

The session considered IoT governance from various perspectives. To understand baseline IoT evolution and the associated challenges, opportunities and responses, the IoT can best be viewed as an internet of data, devices, systems or functions. For simplicity, we can call these "Internets of X" (IoX). Each perspective brings its own understanding of what is possible, desirable or undesirable, and of the tools and processes needed for governance.

Each approach must be considered on its own terms, but they start from a common base of experience and must ultimately come together to provide good governance. This leads to the need for an ecosystem comprising stakeholders such as technical experts, governments, service providers, manufacturers, users, standards bodies, and military as well as civilian organisations, varying in global and regional perspectives.

One immediate consequence is that IoT governance must respect a range of perspectives. Our fundamental principles are unlikely to be universal, especially when applied to specific IoT contexts. By analogy with the sensors and actuators of the IoT itself, governance needs to ‘sense’ the interests and perspectives of all significantly affected parties and somehow balance them to inform decisions at various levels. In other words, it requires multistakeholderism. It is not that specific expert groups (e.g., engineers) are insensitive to the needs of others (e.g., end users) but that they may misunderstand their interests, capabilities and behaviour.

The session began with a consideration of simple and recognisable use cases in which major challenges can already be seen (though they will become more complex). IoX components and their complex or hybrid assemblages will and should interact with others, so they must be identified uniquely and discovered with appropriate levels of precision, reliability, and permanence and be capable of enrolment in or separation from IoX systems. The concept of ‘identity’ has some subtlety. For instance, a smart home must be able to recognise and be recognised by new IoT components added to the system on a permanent or temporary basis, accorded the right kinds of access and privileges and tracked or remembered appropriately. These identities enable necessary functions, including the granting of trust. But they need not be unique, durable or universal. Indeed, categorical or shared identities (e.g., type certification) may be more practicable, scalable, flexible, future-proof, secure and robust to, e.g., (hardware, software or data) updates and interconnection or federation to create identifiable hybrid systems. Three subtleties linked to identity that came up in the discussion were security (including but not limited to cybersecurity), privacy (including but not limited to data privacy) and ownership (including protections against identity theft or misuse and, conversely, the use of identity to carry liability or responsibility).

Various identity schemes were discussed, ranging from central registries of semi-permanent discrete identities (along the lines of the DNS model) to purely transactional or temporary mutual authentication and identification schemes. These have advantages and drawbacks ranging from theoretical to practical, including technical, legal, commercial, security and other considerations. No single approach seemed to fit all foreseeable circumstances. In placing these in context, the panel recognised that the same concepts applied to the human beings (and organisations) that create, operate and use the IoX. For example, a person is more important than devices or data attributed to him/her, and human rights and responsibilities (e.g., of association and expression) cannot safely be extended to, say, their smart digital assistants. This cuts two ways; it may not be useful to hold a human being accountable for what their devices do in response to interactions with other systems, which the ‘user’ may not even perceive, let alone understand or control. Conversely, the automation of routine functions may result in their receiving less considered and responsible human attention, with unintended, undesirable and possibly irreversible results.

The discussion also considered desirable properties that might provide an ethical framework for IoT governance. Many are familiar, e.g., interoperability, transparency and accountability, robustness, resilience, trustworthiness, user empowerment, privacy and security. They are not IoT-specific but may need to be reinterpreted in that context. For example, IoT devices can harvest a wide range of data almost invisibly, which creates general privacy and security risks and affects global development, e.g., via 'data colonialism', whereby devices originating in and provisioned by the global north capture data from users in the global south to produce innovations for the benefit of the north, and lock in users in the south in ways that inhibit their techno-societal development.

One desideratum, scalability, came up in relation to technologies, service provision, use cases, data issues, labelling and certification schemes, and legal frameworks. This is a generic issue, but the panel highlighted aspects that stand out clearly in the IoT context. One is complexity; as systems scale quantitatively, their qualitative properties may change and, with them, the appropriate kind of governance. Rules may need to be more general, neutral, principles- or function-based. Alternatively, governance may need to move between the data, device, software, etc., planes as systems interconnect in larger and more diverse ways. Another is practicability; effective governance may require limits on scale or interoperability. A further aspect is Quality of Service (QoS). The IoT-specific emphasis on low latency can constrain system scale, security or flexibility. Beyond this, QoS considerations may lead to multi-tier systems, which may reduce economic welfare, hinder interoperability or distort innovation. Large-scale systems may also be more susceptible to intentional or accidental compromise; effective access control in large environments may lead to inappropriate inclusions or exclusions. Under laissez-faire evolution, IoT systems may reach stable sizes and configurations, but these may not be optimal. Finally, very large systems may be difficult to govern with national or self-regulatory arrangements. For example, identification and certification schemes that identify individual devices or types scale with their number, but cannot identify even pairwise interactions (which scale as the square of the number of interacting entities). As scale increases, management overhead and costs increase, and utility and use eventually decline.
This, however, depends on the governance architecture; a centralised system (analogous to the cloud) offers economies of scale (or diseconomies) and a natural platform for observing systemic behaviour and emergent threats (if not weak signals). However, it creates additional power asymmetries and vulnerabilities; no one governance architecture will likely fit all cases. The group also mentioned other aspects of scale, such as environmental impact.
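The scaling argument above can be made concrete with a small sketch (the figures are purely illustrative and not from the session): a registry that identifies individual devices grows linearly with the number of devices, while the set of possible pairwise interactions grows as n(n-1)/2, i.e. quadratically.

```python
# Illustrative sketch: identifying devices scales linearly, but the
# number of possible pairwise interactions scales quadratically,
# which is the governance-overhead concern raised in the session.

def pairwise_interactions(n_devices: int) -> int:
    """Number of distinct pairs of devices that could interact: n*(n-1)/2."""
    return n_devices * (n_devices - 1) // 2

for n in (10, 1_000, 100_000):
    print(f"{n:>7} devices -> {n:>7} registry entries, "
          f"{pairwise_interactions(n):>13,} possible device pairs")
```

At 100,000 devices the registry still holds 100,000 entries, but there are roughly five billion possible device pairs, which is why per-interaction governance stops scaling long before per-device governance does.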

Another aspect that ran through the various phases of the discussion was trust and trustworthiness; beyond the customary discussion of e-trust, the panel contrasted high-trust and Zero-trust approaches to the problems of identification and interoperability.

The issue of AI in the IoT came up often but not in depth. The panel recognised that it complicates the IoT, especially when considering smart devices and the emergent intelligence of connected systems. Foreseeability and explicability were discussed, as was the possibility that data-driven systems might be particularly vulnerable to noisy or biased data.

The panel considered various legal approaches and the 'regulatory game' being played out among countries, industries and civil society groups. Governance competition could spur the development of innovative and effective standards if different approaches can be compared and a suitable global standard emerges through a kind of 'Brussels Effect'. This seems more promising than a too-rapid imposition of global standards and regulations whose implications cannot be foreseen. However, this result is not guaranteed; we could see damaging fragmentation or a rich diversity of approaches matching different contexts. Research on policy initiatives in 40 countries around the world shows that governments often do not regard modern global open source standards and global good practices with security at the core as "important". It was suggested that governments could lead the way by taking such standards actively on board in their procurement activities. Keeping the discussion going and actively engaging with other DCs will support a positive outcome and an increased understanding of good global practices in IoT governance. Three important takeaways:


  • IoT data, especially AI-enhanced, should be understandable, accessible, interoperable, reusable, up-to-date and clear regarding provenance, quality and potential bias.
  • At the level of devices, there need to be robust mechanisms for finding, labelling, authenticating and trusting devices (and classes of devices). These should survive retraining, replacement or updating but be removable when necessary for functional, security or privacy reasons. To ensure IoT functionality, trustworthiness and resilience, market information and incentives should be aligned. Labels provide a powerful tool; many countries have developed and adopted IoT trust marks, and the time has come to start working towards their international harmonisation.
  • Functions are not all confined to single devices, designed in or provided by system integrators; they can also be discovered by end-users or emerge from complex system interactions in cyber-physical systems (CPS) and IoT-enabled services. Governance requires methods for recognising, protecting and controlling these functions and their impacts.


IGF 2023 DCNN (Un)Fair Share and Zero Rating: Who Pays for the Internet?

Updated: Mon, 06/11/2023 - 15:00
Avoiding Internet Fragmentation
Key Takeaways:

Large platforms generate enormous amounts of traffic, but at the same time they contribute to network infrastructure costs, e.g. by building undersea cables or content delivery networks. The most traffic-intensive platforms have been zero-rated for almost a decade by most operators in the world, including those currently proposing "fair share contributions", and in most Global South countries zero-rated models are still very common.

Calls to Action

A more comprehensive analysis of the interconnection market is needed, including an assessment of the role of content delivery networks. Increased multistakeholder dialogue is needed to foster a better understanding of the issues at stake and of whether proposed solutions such as fair share are de facto needed or not.

Session Report

 

The purpose of this session was to explore the so-called "fair share" debate, which is rising in popularity, especially in the European Union and South Korea, and moving rapidly to Latin America. The session also discussed the connection between fair share and zero-rating schemes, which are especially popular in countries of the Global South.

 

The session adopted an evidence-based approach featuring multiple stakeholder perspectives, discussing to what extent fair share and zero rating can be beneficial for the internet economy and whether they contribute positively or negatively to the sustainability of the internet ecosystem.

 

Furthermore, panelists explored how these two core issues connect with the broader debate on Internet openness versus Internet fragmentation. The session was structured according to the following agenda:

 

Brief intro by Luca Belli, Professor and Coordinator CTS-FGV (5 min)

 

First slot of presentations (6 or 7 minutes each)  

  • Artur Coimbra, Member of the Board of ANATEL, Brazil  
  • Camila Leite, Brazilian Consumers Association (IDEC)
  • Jean Jaques Sahel, Asia-Pacific Information policy lead and Global telecom policy lead, Google  
  • KS Park, Professor, Korea University
     

Q&A break (10 to 12 minutes)

 

Second slot of presentation (6 or 7 minutes each)

  • Maarit Palovirta Senior Director of Regulatory Affairs, ETNO
  • Thomas Lohninger, Executive Director, Epicenter.works
  • Konstantinos Komaitis, non-resident fellow, the Atlantic Council  

 

 

Participants stressed that, over the past decade, we have witnessed increasing concentration in a few internet platforms, whether in social media or cloud computing, and such players generate a very relevant percentage of internet traffic. There is a wide range of ongoing regulatory initiatives aimed at framing large platforms, but over the past two years an additional type of regulatory proposal has been surfacing: imposing network fees on large platforms so that they pay their "fair share" of network-related costs.

 

In countries such as Brazil, 95% of users utilize internet access primarily for instant messaging and social media (e.g. WhatsApp, Facebook and Instagram are installed on 95%, 80% and 70% of Brazilian smartphones, respectively), and virtually all video recordings shared online in Brazil are hosted on YouTube.

 

Large platforms generate enormous amounts of traffic, but at the same time they contribute to network infrastructure costs, e.g. by building undersea cables or content delivery networks.

 

The most traffic-intensive platforms have been zero-rated for almost a decade by most operators in the world, including those currently proposing "fair share contributions", and in most Global South countries zero-rated models are still very common.

 

Some relevant points that need debate and clarification:

 

1) Large platforms generate a lot of traffic because they have a lot of customers, not because they engage in any illegal or inappropriate practice. It is true that in most countries they face extremely low levels of taxation compared with their profits, but to cope with this distortion it would be much wiser to review their taxation regime rather than simply shift part of their revenues to internet access providers.

 

2) Some regulators or operators have portrayed large platforms as free riders on internet infrastructure. This is not correct, as platforms also invest enormously in infrastructure, e.g. by building submarine cables and large content delivery networks that are essential to maintaining good quality of service and a good user experience.

 

3) Participants stressed that the topics of fair share and zero rating are connected: large platforms have not become responsible for such enormous amounts of traffic by chance. Rather, the most traffic-intensive apps have been zero-rated for almost a decade by most operators in the world, as demonstrated in an empirical analysis that was the annual output of this coalition already in 2018.

 

Actions suggested:

 

A more comprehensive analysis of the interconnection market is needed, including an assessment of the role of content delivery networks.

 

Increased multistakeholder dialogue is needed to foster a better understanding of the issues at stake and of whether proposed solutions such as fair share are de facto needed or not.

IGF 2023 DC-PAL Public access evolutions – lessons from the last 20 years

Updated: Mon, 06/11/2023 - 08:53
Digital Divides & Inclusion
Key Takeaways:

There is an increasing disconnect between trends in connectivity and real-world outcomes, even on the basis of the limited data that we have. There is a strong need to invest in stronger data collection as a basis for meaningful internet governance decision-making.

,

Public access, as a multipurpose means of helping people make the most of the internet, has proven itself an adaptable and effective means of achieving people-centred internet development. It has proved its worth in the face of shocks, in allowing engagement with new technologies, and as a means of localising digital inclusion policies.

Calls to Action
The full potential of public access, as a way to address the decoupling of progress in extending connectivity from broader social progress, needs to be realised as part of internet strategies going forward.
Session Report

Evolutions in Public Access

 

It has been 20 years since the WSIS Action Lines were defined, setting out the importance of connecting libraries and providing multifunctional public access centres. This fitted into a broader strategy focused on finding rapid and effective ways of bringing the potential benefits of the internet to more people, while acknowledging the importance of a focus on people in order to turn this potential into reality.

 

The introduction to this session therefore set out the question of how public access as a concept has evolved over the past 20 years, as a basis for assessing its continued relevance and understanding how its place in the wider internet infrastructure has changed. It drew on written contributions shared by UNESCO and the Internet Society in particular, which noted that public access had been proven not to compete with other forms of connectivity, that libraries had proven to be adaptable and responsive, that public access had been a basis for service innovation and partnership, and that the fact of offering other services made libraries particularly valuable as public access venues.

 

Maria Garrido and Matias Centeno (University of Washington) set out the challenge faced, based on data collected as part of the Development and Access to Information report. Crucially, this underlined that good progress in general in bringing people online was not being reflected in other areas seen as vital for making access to information meaningful, in particular around equality and fundamental rights online. This illustrated the potential weaknesses of a tech-only approach.

 

Ugne Lipekaite (EIFL) offered a rich set of evidenced examples of how public access had proven its ability to help solve wider policy challenges, as well as its ongoing essential role in working towards universal connectivity. It had, indeed, been a driver of entrepreneurship and growth.  Crucially, many of the same trends could be observed in very different parts of the world, opening up possibilities for mutual learning in terms of how to develop public access most effectively.

 

Woro Titi Haryanti (National Library of Indonesia) described how public access was at the heart of a national strategy to develop library services as a means of improving lives. Centrally, the emphasis was on ensuring connectivity, providing adaptable content and building staff skills in order to develop programming that could combine public access with other support (including via partners). Thanks to this work, the library was increasingly seen as a partner for wider social development programming.

 

Don Means (Gigabit Libraries Network) underlined that libraries were often early adopters of new technology, providing a means for people not just to get to know the internet, but also new ways of working with it. They had also proven their role in connecting online services with users, for example to ensure that those needing to use eGov services were able to do so. They also offered a crucial backstop of parallel access technology, which boosted resilience.

 

The audience was then asked to share views via Mentimeter. They underlined their agreement with the idea that public access had a key role in the connectivity infrastructure and in future strategies, as well as broadly believing that public access complements other forms of connectivity.

 

 

Key themes that emerged in the discussion included:

  • Public access had proven to be a structure for delivering on the promise of localisation of the internet, and of digital inclusion efforts in particular. Rather than a purely tech-led, supply-side approach, public access centres allowed supply and demand to meet effectively and inclusively.
  • The definition of meaningful access in general needed to include access to meaningful support services for those who needed them in order to make the most of the internet.
  • It was important to develop wider internet resilience strategies, in order to keep things going in times of disaster. Public access was a key part of this.
  • We needed to change the narrative about libraries in particular, and recognise (inside the library sector and outside) their role as agents for digital inclusion.
IGF 2023 Town Hall #134 The Digital Knowledge Commons: a Global Public Good?

Updated: Mon, 06/11/2023 - 08:07
Data Governance & Trust
Key Takeaways:

The digital knowledge commons make a key contribution to what the internet is, with strong potential for growth, through AI, opening collections, and more inclusive practices

Calls to Action

We need to stop regulating the Internet as if it were only made up of major platforms – this risks harming public interest infrastructures

Session Report

Safeguarding the Knowledge Commons

 

As an introduction to the session, the moderator underlined that while shared knowledge resources had initially been included in definitions of digital public goods, they were not such a strong focus of subsequent initiatives. In parallel, UNESCO’s Futures of Education report had placed the concept of a Knowledge Commons at the centre of its vision, seen as a body of knowledge which is not only accessible to all, but to which everyone can make contributions.

 

Finally, organisations working around knowledge had long promoted the importance of realising the potential of the internet to enable global access to knowledge, and address barriers created in particular by intellectual property laws.  

 

Tomoaki Watanabe (Creative Commons Japan) underlined the particular questions raised by new technologies, and AI in particular, given their generation of new content that could potentially be free of copyright (3D data, scans, AI-generated content). This had the potential to create dramatic new possibilities that could advance innovation, creativity and beyond.

 

While there clearly were questions to be raised around information governance and AI (not least to highlight AI-generated content), copyright appeared to be a highly inadequate tool for doing this.

 

Amalia Toledo (Wikimedia Foundation) cited the connection between the concept of the knowledge commons and the need for digital public infrastructures that favoured its protection and spread – something that was ever more important. Wikimedia represented just such an infrastructure, but remained the only such site among the most used on the internet, with a constant risk of underfunding.

 

Moreover, laws were increasingly made with a focus on commercial platforms, but caused collateral damage for non-commercial ones such as Wikipedia. Efforts to expand intellectual property laws brought particular risks when they failed to take account of the positives of a true knowledge commons.

 

Subsequent discussion highlighted the following issues:

  • The knowledge commons as a concept raised interesting questions about governance, and in particular how to ensure that it was inclusive and meaningful for everyone. There was a need for actors such as Wikipedia and libraries to apply rules in order to make it functional and sustainable.
  • The need to look beyond copyright as a tool for regulating information flows, given how blunt an instrument it was, and, in the context of AI in particular, to take care in making decisions. Too often, generative AI was mistaken for all AI, and policy choices risked imposing major costs even on research and education uses.
  • The value of a more holistic approach to upholding the knowledge commons in general, and the public domain in particular, in order to safeguard them and realise their potential to support wider efforts to ensure that the internet is a driver of progress and inclusion.
IGF 2023 Day 0 Event #161 Towards a vision of the internet for an informed society

Updated: Fri, 03/11/2023 - 19:04
Digital Divides & Inclusion
Key Takeaways:

Importance of localization - if we want to promote an inclusive internet we need to localize our approaches

,

Libraries are natural partners for any actor in the Internet inclusion space

Calls to Action

People should reassess their mindset about libraries and see them as tech test beds, key sources of content and community infrastructures

Session Report

As awareness grows of the limitations of a purely technological definition of connectivity, and of the complex economic, social and cultural implications of the increasing ubiquity of the internet, so does the need to find ways to realise the goal of a human-centred internet. This session drew on the experience of libraries around the world as institutions (staffed by a profession) focused on the practicalities of putting people in touch with information, and of helping them use it to improve their lives.

Winston Roberts (National Library of New Zealand (retd)) set the scene, highlighting the place of libraries in the original WSIS Agenda, which of course included strong reference to connecting libraries and the value of multi-purpose public access centres. He highlighted that while 20 years had passed, the evolution of the internet had only underlined the importance of having institutions like libraries in order to support universal and meaningful use, as part of a broader approach to internet governance. Thanks to this, it was not only possible to deal with the worst excesses, but also to unlock some of the potential that the internet creates in order to achieve goals around education, social cohesion and beyond. 

Nina Nakaora (International School of Fiji) highlighted the work that libraries had done, in particular during the pandemic, to provide access to learning materials. Again, this illustrated the value of having actors in the wider internet system focused on ensuring that public interest goals were achieved, especially where the market was unlikely to create solutions. She highlighted that, at the same time, to play this role libraries needed to benefit from investment in hardware, connectivity and skills.

Rei Iwaski (Notre Dame University, Kyoto) reflected on the Japanese experience of providing information services through libraries. She echoed Nina Nakaora's point that this was a potential that could only be realised when libraries were integrated into wider planning. Their cross-cutting missions meant that they often did not fit easily into any one policy box, and that they also needed to build their own sense of agency as actors in internet governance.

Misako Nomura (Assistive Technology Development Organisation) highlighted the particular situation of users with disabilities. Once again, this illustrated the need to move beyond a laissez-faire approach, and to look at how to connect people with opportunities. Her work included both developing materials for persons with disabilities and ensuring access to technology and wider support. With an ageing population, finding ways to bridge accessibility gaps would be an increasingly important part of wider digital inclusion efforts, and so a strong and properly resourced set of institutions to do this would be essential. 

Woro Titi Salikin (National Library of Indonesia) brought practical examples, again, of the power of facilitating institutions such as libraries in helping people to make the most of internet connectivity in order to deliver real-world change, in particular focused on gender inclusion and supporting entrepreneurship. The Indonesian experience demonstrated that it was possible to make change happen at scale through the right balance of centralised support and local flexibility to adapt services to circumstances.

The subsequent discussion highlighted the following key points:

- the need to integrate libraries into wider strategies in order to realise their potential. Indonesia offered a strong example, with the close connection between the national library as coordinator of a wider network and central government. Elsewhere, this wasn't the case, and opportunities were being missed

- the fact that librarians too often lacked the sense of agency and skills necessary to fulfil their potential as facilitators of digital inclusion. The sector was at risk of remaining in traditional roles, especially when partnerships with other actors could not be formed. There was a need to build awareness of the responsibility that libraries have in the digital world

- the fact, nonetheless, that libraries do have a unique and flexible role in society which could be mobilised to support a wide range of different agendas

Collectively, the conclusions pointed to the need to reaffirm the role of libraries, both as a means of activating libraries and librarians themselves, and as a way of stating the case for libraries as actors in internet governance processes and as partners for delivery. This is at the heart of IFLA's Internet Manifesto Revision, currently underway, to which all participants were invited to contribute.

 

IGF 2023 DC-CIV Evolving Regulation and its impact on Core Internet Values

Updated: Fri, 03/11/2023 - 14:21
Avoiding Internet Fragmentation
Key Takeaways:

1. The Internet has been self-organising, with as little regulation as possible for it to work; if strong regulation is introduced, it will hinder its technical functioning. Too much regulation will damage interoperation. As Internet networks evolve into space with no borders, there are question marks as to how its Core Values will be sustained.

,

2. One of the major policy tensions in digital life pits anonymity against accountability. Anonymity has been a key aspect of internet activity, but we have painfully learned that full anonymity can be exploited in ways that allow bad actors to escape being held accountable for the harms they cause. Systems must be developed to bring accountability without compromising essential anonymity - and layering identity levels is one way to do it.

Calls to Action

- The Internet community, including the private sector, civil society and the technical community, should actively engage with governments to make them understand why a multistakeholder IGF is important.

,

- Use of encryption needs to continue - without encryption, many of the functions that keep the Internet safe would be negatively impacted.

Session Report

 

DC-CIV Evolving Regulation and its impact on Core Internet Values

Report on the Internet Governance Forum (IGF) Session.

Main Report

The Core Internet Values, which comprise the technical architectural values by which the Internet is built and evolves, also include ‘social’ or, in other words, ‘universal’ values that emerge (or derive) from the way the Internet works.

The Internet is a global medium open to all regardless of geography or nationality. It is interoperable because it is a network of networks. It does not rely on a single application, but on open protocols such as TCP/IP and BGP. It is free of any centralized control, except for the needed coordination of unique identifiers. It is end-to-end, so traffic travels from one end of the network to the other. It is user-centric, so users have control over what they send and receive, and it is robust and reliable.
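The end-to-end idea described above (application logic lives at the endpoints, while the network in between only forwards packets) can be illustrated with a minimal, self-contained Python sketch. The loopback address, port choice and payload here are arbitrary illustrative choices, not anything from the session:

```python
import socket
import threading

def echo_upper_server(server_sock):
    """All application logic sits at this endpoint, not in the network:
    receive bytes, transform them, send a reply."""
    conn, _ = server_sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data.upper())

def demo():
    # The loopback interface stands in for "the network"; it merely
    # forwards packets between the two endpoints.
    server = socket.create_server(("127.0.0.1", 0))  # port 0 = ephemeral
    host, port = server.getsockname()
    t = threading.Thread(target=echo_upper_server, args=(server,))
    t.start()
    with socket.create_connection((host, port)) as client:
        client.sendall(b"end to end")
        reply = client.recv(1024)
    t.join()
    server.close()
    return reply

if __name__ == "__main__":
    print(demo())
```

Nothing between the two sockets needs to understand the payload; only the endpoints do, which is the architectural property the Core Values discussion refers to.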

The Dynamic Coalition on Core Internet Values has held sessions at every previous IGF. During the 2023 IGF in Kyoto, the Coalition discussed the topic of "Avoiding Internet Fragmentation" with "International Legal Perspectives" as the sub-theme - all part of this year’s “Internet We Want”.

The following questions were examined during the session:

  • In a changing world and a changing Internet, should the Internet stick to its Core Values?
  • Is more legislation needed? If yes, how should it be drafted?
  • What are the risks of "changing" the Core Internet Values for the future of the Internet?
  • Could we end up with fragmentation? With the end of the Internet as we know it?
  • Could we end up with a better, safer, cleaner post-Internet network of networks? Is this achievable or is this a pipe dream? Does this have an impact on democracy across the world?

Panelists included Lee Rainie, Jane Coffin, Nii Quaynor, Iria Puyosa and Vint Cerf, with interventions from the floor moderated by Sébastien Bachollet, Co-Chair at Kyoto together with Olivier Crépin-Leblond.

Deliberations

The deliberations during this meeting (panelists' presentations, participant interventions and Q&A) are reported here without attribution to the specific panelist or participant.

Broadly speaking, there have been four notable 'phases' that could be seen as 'revolutions' in Internet evolution:

  • Home broadband. It sharply increased the "velocity of information" in people’s lives, winning praise for the way it democratized creativity, story-telling and community building. But it also spawned concern about misinformation, for example in the medical community – and about the type of content to which children might be exposed.
  • Mobile connectivity. Mobile phones became ubiquitous and became all-purpose “extra body parts and brain lobes” that allowed people to reach out and be contacted at any time, anywhere, without the need for knowledge on how to operate a computer. But a backlash grew about the ways in which phones disrupted people’s time use and attention allocation.
  • Social media. It exposed users to new information and gave them new ways to share their lives and create. The backlash has focused on the impact of social media on people’s emotional and mental health (especially for younger women and girls), the ways it can be used for information warfare, the political polarization and tribalism it has enabled, and menacing behavior like bullying and physical threats.
  • Artificial intelligence. Often functioning unnoticed and uncommented upon, AI allowed people to live their lives more conveniently, efficiently and safely. It promised productivity increases. But the backlash starts with people’s inherent wariness of anything that might challenge their rights, their autonomy and their agency. There are widespread concerns about job loss, bias and discrimination, and whether AI can be used ethically. 

It is worth noting that these and other concerns have mostly arisen at the level of applications, rather than the essential architecture of the Internet. Unfortunately, the concerns at the cultural, legal and social level usually drive policy deliberations that could limit the way the Internet functions.

Users almost unanimously support the Core Values of the Internet: open, free, secure, interoperable, end-to-end, permissionless innovation. Yet each of the revolutions above engendered its own backlash.

Beyond those general concerns about digital functions, there is evidence that different people experience those revolutions differently. Those group differences drive concerns and calls for further regulation. At the group level, it is clear that divisions by gender, age, race/ethnicity, class, nationality and religious affiliation affect people’s online experiences. There are also divisions along the lines of people’s awareness and knowledge of technology, and these traits cause them to experience and react to technology differently.

To further complicate the picture, it is clear that individual technology users act in different ways under different circumstances. They are not necessarily predictable and their actions are often contingent, transactional, and context specific. This makes it very hard for those designing policies to take into account the variety of ways people will use technology or have concerns about its impact on them.

In global surveys and other research, there is a division that pits individuals against society. Individual actors are often confident that they themselves can navigate the problems of information and communication ecosystems, but believe that others are incapable of doing so. That results in an almost universal sense that “I’m OK, but the rest of the world is not”.

How should policy makers understand that and take account of such an array of social, cultural, and legal variance as they try to think about regulations for the Internet? It is a chaotic picture that suggests that policy proposals affecting the basic functioning of the Internet should be undertaken with great caution and much humility.

The Internet has been self-organizing its network of networks with as little regulation as possible for them to work. There is a lot of support for this self-organization at the network level, even though in some cases the shared objective of developing networks for people who do not yet have access appears to have been lost.

Regulation

Caution is advised when facing pressure to “regulate fast... because some serious harm is upon us". Quick and ill-designed regulations may undermine online freedoms or lead to Internet fragmentation.

Before regulating, it is necessary to assess the tradeoffs of different policies as well as the suitable technical implementations of those policies.

Unfortunately, pressure to legislate is driven by public opinion on harms - harms often emphasized by governments seeking to impose legislation. Law enforcement requests for access to private communications, national security, and cyber-sovereignty agendas dominate public debate in most countries.

The Internet will not be the same if it is run in a non-open way - as we can see in countries where there is a zeal to pass laws to "protect the interests of the regimes".

The intent behind such laws may originally have been laudable, but they can also have side effects.

For instance, we observe this problem in legislation threatening end-to-end encryption in the name of providing more safety for children online, legislation establishing widespread Internet surveillance under the pretext of rising concerns about violent extremism, cyber-sovereignty agendas undermining net neutrality, and cybersecurity policies that pose a risk to interoperability.

Technical solutions to online harm must ensure respect for human rights and the rule of law in line with the principles of necessity and proportionality. Any restriction of access to the Internet must be lawful, legitimate, necessary, proportional, and non-discriminatory.

Civil society and the Internet technical community must continue collaborating in facing overregulation trends threatening Internet Core Values.

Some participants in the meeting pointed to further study of countries like Finland and Estonia, which have advanced in terms of e-government. It was also mentioned that the borderless nature of the Internet would expand with more widespread use of “satellite Internet” and Internet Exchange Points in space - thus bringing a new perspective on cross-border issues.

Key Takeaways 

  1. The Internet has been self-organizing, with as little regulation as possible for it to work; if strong regulation is introduced, it will hinder its technical functioning. Too much regulation will damage interoperation. As Internet networks evolve into space with no borders, there are question marks as to how its Core Values will be sustained.
  2. One of the major policy tensions in digital life pits anonymity against accountability. Anonymity has been a key aspect of Internet activity, but we have painfully learned that full anonymity can be exploited in ways that allow bad actors to escape being held accountable for the harms they cause. Systems must be developed to bring accountability without compromising essential anonymity - and layering identity levels is one way to do it.
    Such systems must be designed with clear and minimal implications for deep architectural changes. A layered approach (possibly in the application layer) may be desirable. 

Call to Action

  1. All stakeholders should actively engage in understanding, appreciating, and expanding knowledge of the Internet’s Core Values and the damage that may arise from actions that, deliberately or as unintended consequences, impinge negatively on them. The list is not long, and it starts with layered architecture, packet switching, “best effort” (i.e. design for resilience against failure), interoperability, openness, robustness (Postel), end-to-end (meaning that most functions that are not packet transmission are a responsibility of the “edge”, implying network neutrality), decentralization, scalability, and, as a consequence, universal reach and “permissionless innovation”.
  2. Laws, norms, and treaties must all be commensurate with these values and only impinge on any of them after a deep analysis by all stakeholders, and with safety valves to avoid irreversible unexpected consequences down the road. 
  3. The Internet community including the private sector, civil society, technical community should actively engage with governments to make them understand why a multistakeholder IGF is important.
  4. Use of encryption needs to continue - as without encryption many of the functions of the Internet's safety will be negatively impacted.

 

IGF 2023 WS #570 Climate change and Technology implementation

Updated: Thu, 02/11/2023 - 21:48
Sustainability & Environment
Calls to Action

Enhancing legal compliance and accountability in implementing environmental laws requires global efforts from governments, private sectors, and international organizations.

,

Sustainable digital transformation, involving transparent policies, sustainable design, and accessible technology solutions, is crucial to address climate challenges, requiring global collaboration and immediate action from all stakeholders.

Session Report

 

The intersection of sustainability, digitalization, and climate change has become a crucial topic in today's global concerns. This report synthesizes the key points discussed by the speakers from the session. These experts provided insights into how the digital age can both exacerbate and alleviate climate challenges, and their recommendations to address this complex issue. The Key Takeaways of the session were:

  • Digitalization and Its Environmental Impact: The speakers began by highlighting the growing significance of electric and autonomous mobility, emphasizing that digital technologies, especially electric vehicles (EVs) and autonomous mobility, place significant demands on energy production and computational power. This shift creates new challenges, such as the allocation of electricity from the national grid to EV users and the need for updated policies to accommodate this transition.
  • The European Union's "twin transition" strategy: Insights were also shared into the EU strategy of combining green and digital transformations, with emphasis on ambitious climate goals such as a 50% reduction in emissions by 2030 and climate neutrality by 2050. To align sustainability with digitalization, the speaker proposed enhanced transparency regarding the environmental impact of digital devices, promoting entrepreneurial thinking for sustainability, and embedding ecological sustainability into design processes.
  • The importance of affordable and accessible technology solutions: There were concerns about the lack of necessary infrastructure to implement expensive technologies in many countries, as well as legal disputes and accountability related to environmental protection laws, emphasizing the need for effective enforcement and compliance mechanisms.
  • AI in Climate Mitigation and Adaptation: In mitigation, AI can optimize electricity supply and demand by considering weather conditions and electricity usage patterns. For instance, building energy management systems using AI can significantly reduce energy consumption during peak times. AI also contributes to climate adaptation by enabling the development of early warning systems and improving climate forecasting. These technologies allow us to take early countermeasures and ensure a stable food supply.
  • Negative Environmental Impacts of Technology: While technology offers solutions for climate change, it also presents environmental challenges, such as the energy consumption associated with electronic devices, data centers, and communication networks primarily powered by fossil fuels. The entire life cycle of electronic devices, from manufacturing to disposal, contributes to energy consumption and carbon emissions. Hazardous chemicals and e-waste pose environmental risks when not managed properly, especially in developing countries.
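The peak-reduction idea behind AI-based building energy management, mentioned above, can be sketched with a toy load-shifting routine (no actual AI involved; the battery model and all numbers are invented for illustration only):

```python
# Toy illustration of load shifting: discharge a battery at the
# highest-load hours and recharge the same energy at the lowest-load
# hours, flattening demand on the grid without changing total energy use.

def flatten_load(hourly_load, battery_kwh, rate_kw):
    """Greedy peak shaving with a fixed-capacity, rate-limited battery."""
    load = list(hourly_load)
    budget = battery_kwh
    # Discharge at the peaks (hours sorted by descending load).
    for h in sorted(range(len(load)), key=lambda h: -load[h]):
        if budget <= 0:
            break
        shave = min(rate_kw, budget)
        load[h] -= shave
        budget -= shave
    # Recharge the same energy at the troughs (ascending load).
    recharge = battery_kwh - budget
    for h in sorted(range(len(load)), key=lambda h: load[h]):
        if recharge <= 0:
            break
        fill = min(rate_kw, recharge)
        load[h] += fill
        recharge -= fill
    return load

demand = [2, 2, 3, 8, 9, 4]  # kW per hour, made-up profile
flat = flatten_load(demand, battery_kwh=4, rate_kw=2)
assert max(flat) < max(demand)               # the peak is reduced
assert abs(sum(flat) - sum(demand)) < 1e-9   # same total energy
```

A real system would replace the fixed profile with an AI-generated forecast of weather and usage patterns, but the benefit claimed in the session (lower consumption from the grid at peak times) comes from exactly this kind of shifting.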

The discussions by various speakers highlighted the following unified actions:

  • Ensure that digital technology contributes to sustainability goals and consider the environmental impact of digital devices.
  • Invest in research and development to create green and energy-efficient technologies, especially for regions with increasing energy demands.
  • Advocate for effective enforcement mechanisms and accountability in environmental protection laws globally.
  • Encourage responsible consumption by extending the life cycle of electronic devices, reducing e-waste generation, and adopting sustainable practices in manufacturing.
  • Encourage collaboration between governments, businesses, research institutions, and individuals to harness the full potential of technology in combating climate change.

The global discussion on the intersection of sustainability, digitalization, and climate change is multi-faceted and addresses various challenges and opportunities, and needs more action from governments, civil society and the private sector. Through these unified calls to action, the digital age can be harnessed to mitigate climate change and transition toward a more sustainable future.

 

IGF 2023 WS #209 Viewing Disinformation from a Global Governance Perspective

Updated: Thu, 02/11/2023 - 19:12
Global Digital Governance & Cooperation
Key Takeaways:

1. A more nuanced approach to disinformation is called for, which should not only focus on social networks or digital platforms but also consider the wider media landscape. Furthermore, more empirical research is needed to realistically assess the danger posed by disinformation. We should not simply take for granted the effect of disinformation on people's thinking and (voting) behaviour.

,

2. There is no single global solution against disinformation that works in every instance or context, and it is unlikely that governments will agree on how to address it. What is needed, however, is a common set of principles to guide how we think of and act upon disinformation. Human rights and access to information must be front and center of such principles.

Calls to Action

1. Regional human rights courts need to be resourced in a way that they can function as mechanisms in the regulation of disinformation.

,

2. High quality journalism is an effective means against the impact of disinformation but faces an uncertain future. More work needs to be done to strengthen independent journalism particularly in countries with a high incidence of disinformation.

Session Report

 

Workshop Report  - IGF 2023 WS #209: Viewing Disinformation from a Global Governance Perspective

 

Workshop process

Part 1:

The workshop opened with the moderator asking participants to stand and gather along an imagined line on the floor in the room based on the extent to which they agreed or disagreed with the following statement: "Disinformation is undermining democratic political participation". The moderator then walked around the room and asked people to share their views and why they agreed/disagreed or stood somewhere in the middle. They were encouraged to shift their position if the discussion led to them rethinking their initial opinion.

Views in the room were diverse.  Almost all participants stood in the area signifying agreement with the statement.  Several offered examples from their countries and larger experiences that they believed demonstrated a strong causal link between disinformation and democratic erosion.  Two people, including one of the speakers, stood in an intermediate position and argued that a nuanced and contextualized approach is needed in examining cases so a binary choice between “is/not causing” was not adequate.  One person stood in the area signifying no impact of disinformation.

The moderator also asked the panelists to share their perspectives, and, in doing so, to respond to the question: “What is disinformation, is it a serious problem, and if so, why (or why not, if you believe it is not a serious problem)?”

Interactive discussion on this question between participants and the panelists continued for about 25 minutes. One of the panelists responded by asking what impact of disinformation we care about. He also suggested that disinformation is an umbrella term that is too broad as a basis for regulation. A person from the audience added that disinformation is not new and that every medium has been abused for purposes of propaganda. One panelist pointed out that there is a lack of empirical evidence about the impact of disinformation. Most of what we know concerns the production and dissemination of disinformation, while its effect on people’s worldviews and voting behaviour is mostly taken for granted. Recent research suggests that disinformation amplifies extremist beliefs rather than instigating them. As a closing question, the moderator asked participants if any of them lived in contexts where disinformation does not have a major impact. Two people responded to say that in their countries disinformation does not appear to be causing much harm, due to the presence of a serious and legitimized mass media and other factors. A panelist concluded that high-quality journalism is the best way to combat disinformation.

Part 2

The second question put to the panel and the participants was: “Can disinformation be regulated internationally? How strong and clear a baseline do existing international instruments provide for the governance of disinformation? What are the implications for rights to access information and freedom of expression?”

There was no common view on whether disinformation can be regulated internationally. Panelists doubted whether there can be one solution for all the different forms of disinformation. There was agreement on the need for a common set of principles to guide how we think of and act upon disinformation. Human rights, particularly Article 19, which protects freedom of expression and information must be front and center of such principles.

One speaker briefly flagged three examples of efforts to devise international Internet governance responses to disinformation.  These included some problematic proposals for binding treaty commitments among governments that have been floated in the UN cybersecurity and cybercrime discussions; the European Union’s Code of Practice on Disinformation; and the UN Secretary General’s proposed Code of Conduct for Information Integrity on Digital Platforms.  It was pointed out that while the first example involved efforts to devise constraints on state behavior that would never be agreed in geopolitically divided UN negotiations, the second two involve codes of practice pertaining mostly to the providers and users of digital platforms.  It was noted that while platforms certainly have responsibilities, focusing largely on them rather than on the governments that produce or support the production of a lot of disinformation is quite a limitation.  There are also open questions around the reliance on codes and guidelines varyingly interpreted and implemented at the national level.

The next question was: “Concerning new governance initiatives, what sort of consultation and decision-making process is best suited to the governance of disinformation, and can the IGF assume a role in the process?”

This provoked a very interesting discussion. Participants involved in the Christchurch Call shared how they put multistakeholder consultation at the centre of their efforts to combat online extremism. The key lessons they shared that are relevant to responding to disinformation were (1) that the multistakeholder approach has been critical to creating trust among the actors involved, (2) the need to take time and form partnerships with the diverse actors involved, (3) keeping the scope and focus tight, and (4) not rushing into regulatory intervention.

Part 4 - Closing

In closing, panelists offered their main take-aways, including things they did and did not want to see.  There were calls for better empirical research and evidence about the effects of disinformation; for more nuanced policy responses, including avoidance of governments using “moral panics” on disinformation to justify restrictions of human rights; for multistakeholder participation in crafting governance responses; and for hefty fines on Elon Musk’s X for violations of the EU’s rules.

IGF 2023 Day 0 Event #177 Transforming technology frameworks for the planet

Updated: Tue, 31/10/2023 - 20:36
Sustainability & Environment
Key Takeaways:

Cooperative models and approaches to technology have created pathways for communities and movements to address their needs, including for digital inclusion and decent work.

,

It is critical that technological responses to planetary crises do not adopt a single model or approach, but rather support diverse community-led and cooperative models that centre care and solidarity.

Calls to Action

Governments must ensure that the precautionary principle is upheld in digital governance norms and standards, including policy responses to the role of technology corporations in carbon offsetting, and geoengineering.

,

All stakeholders must work to support models of technology that centre care and solidarity.

Session Report

On 7 October, 2023, the Association for Progressive Communications (APC), Sula Batsu, Nodo TAU and May First Movement Technology convened a pre-event discussion to the global IGF, focusing on cooperative models and approaches to transforming technology frameworks for the planet.

During the discussion, speakers from Sula Batsu, Nodo TAU and May First Movement Technology shared experiences from their work, emphasizing the critical importance of participation and accountability in cooperative models and approaches to technology.

Kemly Camacho reflected on the experiences of Sula Batsu in learning how to put care at the center of their business models using approaches that are rooted in feminism, solidarity, and collective care.

Speaking from the experiences of May First Movement Technology, Jaime Villareal shared his perspective on the importance of members of May First being able to collectively own, govern and maintain autonomous infrastructure.

From Nodo TAU, Florencia Roveri described the processes and challenges of transforming their e-waste management and recycling plant into a cooperative, and the value of working with existing cooperatives. Florencia reflected on the need to extend responsibility for electronic waste, and shift perspectives on the dangers of discarded technology.

Yilmaz Akkoyun, Senior Policy Officer of the German Federal Ministry for Economic Cooperation and Development (BMZ), reflected on the discussion from the perspective of the BMZ priorities for digitalisation, emphasizing that cooperation is essential in a holistic approach to address the root causes of the complex problems facing the world today.

Becky Kazansky, a postdoctoral researcher at the University of Amsterdam, framed the discussion of cooperative approaches to technology by reflecting on recent policy developments, and the importance for all stakeholders not to get distracted by technologies and tools that on the surface seem quite promising for mitigating and adapting to climate change, but have proven to be quite harmful for communities around the world.

On-site participants in the event shared questions and reflections on how transforming technology frameworks can be supported in practice, including through amplifying the work of cooperatives like Sula Batsu, Nodo TAU and May First Movement Technology.

Speakers emphasized the need for robust and community-led accountability mechanisms, support for environmental defenders, and shifting perspectives and narratives towards more technology frameworks that prioritize collective care.

IGF 2023 WS #500 Connecting open code with policymakers to development

Updated: Tue, 31/10/2023 - 02:17
Data Governance & Trust
Key Takeaways:

Interest in datasets such as GitHub's Innovation Graph (https://innovationgraph.github.com/) as a way to approach private sector data for public sector research.

,

Discussion on challenges with the technical skills of staff within government needed to implement open source tools, and how to tackle the myths some may have about open source.

Calls to Action

Some topics were too broad and could be narrowed down for more in-depth discussion.

,

There was interest in simplifying the process of using private sector data for policymaking.

Session Report

Connecting open code with policymakers to development

 

This session built on the work of numerous agencies, with speakers from the Government of France's Digital Affairs office, GitHub Inc., and LIRNEasia. The session focused on the theme of ‘Data Governance & Trust’ and how private sector data in general, and technology platform metrics in particular, can inform research and policy on technology maturity, innovation ecosystems, digital literacy and the monitoring of progress towards the SDGs at the country level. GitHub is the world's largest platform for collaborative software development, with over 100 million users. GitHub is also used extensively for open data collaboration, hosting more than 800 million open data files totaling 142 terabytes. Its work highlights the potential of open data on GitHub and demonstrates how it can accelerate AI research; GitHub has analyzed the existing landscape of open data on the platform and the patterns of how users share datasets. GitHub is one of the largest hosts of open data in the world and has experienced accelerated growth of open data assets over the past four years, ultimately contributing to the ongoing AI revolution and helping to address complex societal issues. LIRNEasia is a pro-poor, pro-market think tank. Its mission is to catalyze policy change and solutions through research to improve the lives of people in the Asia Pacific using knowledge, information and technology. Joining the panel was also Henri Verdier, the French Ambassador for Digital Affairs within the French Ministry for Europe and Foreign Affairs. Since 2018, he has led and coordinated French digital diplomacy. He was previously the inter-ministerial director for digital information and communications systems (DG DINUM) of France, and before that the director of Etalab, the French agency for public open data.

The session opened with an overview of what connecting open code with policymakers means and of previous efforts on this topic. The panel highlighted GitHub's work partnering with EU policymakers to ensure the Cyber Resilience Act works for developers. In France, policies have established an “open source software expertise center” within Etalab, which is part of the interministerial digital department DINUM. This is part of a broader effort to set up open source offices in governments that can be observed across public administrations in Europe. The expertise center will be supported by other government initiatives, such as projects within the TECH.GOUV programme aimed at accelerating the digital transformation of the public service. Other efforts, such as the French government's roadmap for developing open source to make it a vector of digital sovereignty and a guarantee of “democratic confidence”, are also part of the conversation. This led to the topic of unmet data needs that private sector data can help address for development purposes, in response to which GitHub announced the Innovation Graph. The GitHub Innovation Graph dataset contains data on (1) public activity (2) on GitHub (3) aggregated by economy (4) on a quarterly basis.
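To illustrate the shape of such a dataset, the short Python sketch below aggregates quarterly, per-economy activity counts from a CSV sample. The column names (`economy`, `year`, `quarter`, `git_pushes`) and the figures are illustrative assumptions for this sketch, not the actual published schema of the Innovation Graph.

```python
import csv
import io

# Hypothetical sample shaped like a quarterly, per-economy metrics file;
# the real Innovation Graph files and columns may differ.
sample = """economy,year,quarter,git_pushes
DE,2023,1,120000
DE,2023,2,135000
JP,2023,1,98000
JP,2023,2,101000
"""

def pushes_by_economy(csv_text):
    """Sum quarterly activity counts per economy."""
    totals = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        totals[row["economy"]] = totals.get(row["economy"], 0) + int(row["git_pushes"])
    return totals

print(pushes_by_economy(sample))  # prints {'DE': 255000, 'JP': 199000}
```

A researcher could apply the same aggregation to any of the per-economy metrics to compare innovation activity across countries over time.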

 

Finally, the panel session concluded with discussion on data privacy & consent as well as efforts to promote and support open code initiatives globally. There was extensive interest by attendees on how to encourage participation and capacity building locally, and encourage more open source development within governments.

 

IGF 2023 DC-Gender Disability, Gender, and Digital Self-Determination

Updated: Mon, 30/10/2023 - 11:43
Digital Divides & Inclusion
Key Takeaways:

Accessible design: not an afterthought, mobile phone-friendly, with easy interfaces. A multistakeholder approach to digital accessibility where the onus is not just on people with disabilities to fix the accessibility problems. Involving persons with disabilities in technology design and development processes - learning from experiences across genders, sexualities, class, caste locations. Integrating digital accessibility in formal education.

,

Thinking about how accessible and affordable technology is for people with disabilities across caste and class locations. Accessibility barriers are also defined by who builds tech and who it is built for. What an inclusive policy framework can look like: ideas of inclusiveness that aren’t homogenised but are representative of a spectrum of disabled experiences.

Calls to Action

A paradigmatic shift in how technologies are designed and developed: instead of developing them at scale, accounting for nuanced and individual use experiences, and creating customised tech centred on layered and individualised experiences rather than a one-size-fits-all approach.

,

Involving persons with disabilities in developing technologies as well as policies - recognising people with diverse disabilities as part of the digital ecosystem and digital spaces. Developing technologies and policies taking into account the diverse experiences of persons with physical and psychosocial disabilities and different layers of accessibility barriers when it comes to inhabiting and occupying digital spaces.

Session Report

 

Lived experiences

Vidhya Y:

  • Digital space is huge - when we say tech, that’s the only way as a blind person I can communicate with the world. It opens up opportunities. Growing up in a village, I didn’t have access to tech and missed out on a lot. But when I got on to online platforms, there was so much I could do. I could access the news, know what time it is, communicate via emails. Most people don’t understand braille. 
  • Taking help from someone to type messages would mean I don’t have privacy over messages I want to say. Digital platforms have enabled many disabled people to have privacy and more autonomy over their choices.
  • Websites aren’t designed in a way all can access. There are a lot of images that aren’t labeled. 
  • For women with disabilities, the barriers are too many! It’s an irony. Digital platforms have given a lot of privacy but at the same time, you have to be so careful. When Covid happened and people were trying to get on online platforms, video calls were a must. I’d adjust my screen to point a bit downwards so people are not able to see much of me. But my sister observed and told me that the camera is actually at the top of the monitor and if you put it down, people can see you more clearly. 
  • I feel I have to take a second opinion about a lot of things in the digital space. New things are coming up all the time.
  • When you’re using a screen reader, if you’re in a crowded place, you tend to misread content. Voice messages also have privacy issues: eg. in conferences I’m unable to use voice message.
  • Typing may be easier if you have some other disability, but it’s a huge issue for visually impaired people. 

Gunela Astbrink:

  • A young woman in Africa, a wheelchair user, has speech impairments, limited use of one hand. She was determined to study IT and went to school, vocational college, and now she sometimes tutors other students. The way she uses smartphone/laptop is with her knuckles. That’s how she communicates with her digital tools.
  • When a person with a disability is online, there’s often a sense that we are all digital beings, and there’s an assumption that we’re all on the same level and will be able to use all tools. However, this isn’t the case. Tools, websites, platforms need to be made accessible. Important for tools and learning platforms etc. to be developed along with PwDs. 
  • Nothing about us without us - so that PwDs are able to be part of development and part of the digital community.

Privacy and security concerns

Vidhya Y:

  • Digital tools enable you to do a lot of things yourself, which wasn’t possible earlier. There are color recognisers, apps to tell you which currency you’re using, apps where sighted people sign up as volunteers for solving captchas, etc. Captchas are designed so that machines can’t solve them, so privacy isn’t compromised, but this is a barrier for many persons with visual impairments if audio captchas are not enabled, even if you can use a computer. If I want to get help in Kannada, the local language, I won’t get help at night; but if you need help in English, there will be someone to assist you.
  • I conducted digital literacy trainings with school teachers and guided them through installing these tools. We found really good uses: you can call a volunteer, and the person who picks up will tell you to point your camera at the captcha on the computer and guide you accordingly. People have used these technologies even to get support in matching their sarees with their bangles.
  • But you’re forced to depend on others at certain times. You’re also wary about where you’re pointing camera - what the other person can see - what data is being collected. At the end of banking transactions, if you have to enter captcha, you have to enter all other details beforehand, which means the person supporting you can see what all you have typed. It’s a huge privacy compromise.
  • Privacy concerns around how much of you should be visible to the other person: apart from your voice you aren’t sure what else is visible. A concern for women with disabilities.
  • For FB, IG etc.: If I were to upload photos I’ve taken during this conference to FB, my cousin will give me the photos with captions. But I don’t know if I’m missing anything in the photos - as I’m relying on the captions. Sometimes people have told me, only half your face is visible, or this photo shouldn’t have been taken.

 

Padmini Ray Murray:

  • Every device we use is compromised by some form of surveillance, and it’s very difficult for non-disabled people to wrap their heads around being online, use these devices and think about how to maintain their privacy.
  • Most devices or apps - even if they’re made for disabled users, might not be taking these considerations into account - while they’re being designed.
  • While there are accessibility guidelines, those are often just the baseline, and there’s much more nuanced requirements of disabled users that need to be taken into account.

 

Imagining inclusive tech

Manique Gunaratne:

  • Through assistive devices and tech, we’re able to work in an equally capable manner with non-disabled people.
  • The problem is often the cost factor in accessing technologies. E.g. hearing impaired persons cannot hear if someone rings the bell, but a smartphone can show them a visual alert that the doorbell is ringing.
  • For visually impaired people, smart glasses can identify what’s around us and provide a description of the surroundings.
  • For people with mobility difficulty, apps and technologies can help them find spaces they can access - restaurants, movie theater etc. Through hand gestures or facial expression if they can operate computers, they can also be employed and economically active.
  • Tech operating through brain functions.
  • Entertainment is not only for people without disabilities. Games, etc. need to be accessible. 
  • Technologies to give emotional recognition, especially for autistic people or those with intellectual disability.
  • Smart homes: PwDs can cook food of their choice, make domestic choices etc.

Judy Okite

  • For a long time, we’ve been advocating for physical accessibility at the IGF - hope it’s better this year. 
  • One of the things we did with KICTANet this year: we evaluated 46 government websites, just to see how accessible their information is for PwDs. Unfortunately, the highest score was 80%. The feedback from the government was interesting: people felt that at 80% you are in a good place. But actually it means 20% of your content is not accessible to PwDs.
  • From research we did: more emphasis is placed on persons who are blind when it comes to digital content. But persons with cognitive disability are more disadvantaged. If the content is not understandable/perceivable, then you’ve lost this person - they will not be able to interact with your content.
  • In Kenya, only about 2 years ago, cognitive disability was recognised as a disability. So we can see how far we are on inclusion. 
  • How do we ensure that PwDs are part of our change - not just because they want to, but because they have to be a part of the process.
  • Forum for Freedom in Jerusalem - in Tanzania - they know my needs on physical platforms - worked with them before. There was a ramp, but I still needed to be lifted up to reach the ramp. They had an accessible room but very small cubicles for washrooms - so I called the guy from the reception who came with a wheelchair and I requested him to push it into the washroom. He asked how can I do that? I asked him back, how do you expect me to get in the washroom then?
  • If they had included a PwD to be a part of this process, the ramp or the washroom wouldn’t have been this bad. Being deliberate in having PwDs as part of the process, the change.

Nirmita Narasimhan

On policy and regulatory processes

  • Important to have policies - ensures that people are aware there’s a need. Mandated. Recognised by law. The fact that there’s a legal and social requirement and responsibility to comply with standards is important in ensuring that accessibility is there. Countries that have policies are better placed in terms of how accessibility is implemented.
  • A lot of countries have implemented the CRPD - domain-specific policies need to come as well. Depends on different strategies and situations.
  • Eg. In India when we had to lobby for the copyright law, we had to do a lot of research on what are the legal models available everywhere. We ran campaigns, meetings, signature campaigns etc. On the other hand, when we look at electronic accessibility, we had meetings with electronics and IT departments, and that’s how we worked with them to develop a policy. While developing the procurement standard in India, we worked with agencies, industries, academic groups etc. on what the standards should be and how they will be implemented. The idea is to get different stakeholders involved and be responsible for this.

Concluding thoughts

Padmini Ray Murray

  • The biggest challenge we struggle with is when we design/develop technologies, we try to do it at scale, which means more nuanced and individual use experiences become harder to provide. This requires a paradigmatic shift in how tech is built - creating customised products. More layered and nuanced. More individualised and personalised experiences rather than one-size-fits-all.

IGF 2023 WS #457 Balancing act: advocacy with big tech in restrictive regimes

Updated: Mon, 30/10/2023 - 11:37
Human Rights & Freedoms
Key Takeaways:

Increasingly, authoritarian states are introducing legislation and tactics of online censorship, including internet shutdowns, particularly during politically sensitive periods. There is an urgent need for civil society and big tech to coordinate in mitigating risks to online free expression posed by sweeping legislative changes and practices empowering authoritarian states.

,

Lack of transparency in big tech's decision-making process, in particular regarding authorities’ user data and takedown requests, exacerbates mistrust and hinders effective collaboration between big tech and civil society, especially under authoritarian regimes. At minimum, platforms should develop comprehensive reports with case studies and examples of their responses in order to keep civil society groups informed and in the conversation.

Calls to Action

Civil society and big tech should initiate structured dialogues to create a unified framework for responding to legislation and practices that threaten online free expression, including internet shutdowns, at the national, regional and global levels, including through multi-stakeholder fora such as the GNI.

,

Big tech companies must commit to radical transparency by publishing detailed policies and data on content moderation and government requests. The companies should establish a dedicated team that engages directly with local civil society, sharing information openly to address nuanced challenges faced in specific geopolitical contexts.

Session Report

The session brought together a diverse group of stakeholders, including representatives from civil society, big tech companies, and policy experts, to discuss the pressing challenges of online censorship, data privacy, and the role of big tech and civil society in authoritarian states. The session also highlighted the importance of multi-stakeholder dialogues and offered actionable recommendations for all parties involved.

The session highlighted that any meaningful progress on ensuring access to the internet and combating censorship online in restrictive regimes can only be achieved in a broader context, in conjunction with addressing the lack of rule of law, lack of independent judiciary, crackdown on civil society and absence of international accountability. 

Key discussions:

  • Legislative challenges: Participants highlighted the rise in authoritarian states introducing legislation aimed at online censorship, often under the guise of national security or cybercrime laws. These laws not only enable content censorship but also force platforms to share user data, posing significant human rights risks and a chilling effect on online expression.
  • Big tech’s responsibility: There was a general consensus that big tech companies have a significant role to play in this landscape. There was also a strong sentiment that platforms need to step up their efforts in countries like Vietnam, where civil society has limited power to effect change due to authoritarian rule.
  • Lack of transparency, especially in big tech’s decision-making processes in particular regarding authorities’ user data and content takedown requests, was a recurring theme. This lack of transparency exacerbates mistrust and hinders effective collaboration between big tech and civil society. Additionally, it allows authoritarian governments to apply informal pressure on platforms.
  • Other barriers that hinder collaboration between big tech and civil society, as flagged by civil society, included issues with the current mechanisms available for civil society to engage with big tech: long reaction times, little progress, no consistent follow-up, concealed results of bilateral meetings between governments and platforms, and the fact that country focal points are often in contact with the government, especially in oppressive regimes, which puts activists at risk.
  • Civil society's role: Civil society organisations emphasised their ongoing efforts to hold big tech accountable. They also highlighted the need for more structured dialogues with tech companies to address these challenges effectively.
  • Multi-stakeholder approach: Both civil society and big tech representatives agreed on the need for a multi-stakeholder approach to tackle the issues. There was a call for more coordinated efforts, including monitoring legislative changes particularly in the face of rapid changes in the online space.
  • Remote participants: Feedback from remote participants underscored the urgency of the issues discussed, particularly the need for transparency and multi-stakeholder dialogues.

Turkey and Vietnam as case studies

Turkey and Vietnam were discussed as case studies to illustrate the increasing challenges of online censorship and government repression in authoritarian states. Both countries have seen a surge in legislation aimed at controlling online content, particularly during politically sensitive times, and both grapple with the complex role of big tech in their unique geopolitical contexts. Big tech in both countries face a difficult choice: comply with local laws and risk aiding in censorship, or resist and face being blocked or penalised.

The civil society representative from Vietnam noted that Facebook has a list of Vietnamese officials who cannot be criticised on the platform, highlighting the extent of government influence. Facebook and Google have been complying with the overwhelming majority (up to 95%) of content removal requests. Activists also pointed to big tech’s inaction in the face of the growing problem of state-backed online trolls.

Some concrete examples showcasing successful advocacy and collaboration between big tech and civil society groups were discussed, such as when, in 2022, the government in Vietnam turned the hard requirement of storing data locally into a soft requirement after civil society activism mobilised platforms to lobby the government.

In the case of Turkey, an amendment package passed in October 2022 introduced up to three years of imprisonment for "spreading disinformation" and imposed hefty penalties on big tech companies, including up to 90% bandwidth throttling and advertising bans for non-compliance with a single content take-down order, further complicating the operating environment for big tech companies. Companies are now also required to provide user data upon the request of prosecutors and courts in relation to certain crimes.

The panel highlighted that this set of laws and lack of transparency allow authoritarian governments to place big tech under significant formal and informal pressure. The threat of throttling in the event of a non-compliance with government requests creates a particularly heightened chilling effect on platform decisions and their responsibility to respect human rights.

On the eve of the general and presidential elections on 14 May 2023, YouTube, Twitter and Facebook restricted access to certain content that involved videos critical of the government and various allegations of crime and corruption against the ruling AKP. While YouTube did not issue any public statement about the censorship on their platform, both Twitter and Meta noted in their public statements that Turkish authorities had made clear to them that failure to comply with its content removal request would lead to both platforms being blocked or throttled in Turkey.

In its transparency report, Meta explained that their top priority was to secure access of civil society to their platforms before and in the aftermath of the elections; and they made the decision to comply with government requests to remove the content because, although critical of the government, the content was not directly linked to election integrity. 

The panel also discussed that the GNI principles state that ICT companies should avoid, minimise or otherwise address the impact of government demands if national laws do not conform to international human rights standards. The initiative also focuses on capacity-building within civil society to engage effectively with tech companies. The representative from GNI also mentioned a tool called “Human Rights Due Diligence Across the Technology Ecosystem”, which was designed to help formulate constructive asks to the relevant stakeholders depending on whether the stakeholder is a social media platform, a telecom company or a cloud provider.

Recommendations for big tech:

  • Develop contingency plans to protect access to platforms during sensitive periods
  • Conduct human rights due diligence before taking any compliance steps
  • Actively engage with local NGOs and invite them for consultations
  • Full disclosure of government requests and compliance actions (Twitter’s publication of the government’s communication on censorship ahead of the Turkish elections was a step in the right direction)
  • Tackle the rise of internet trolls 
  • Protect civil society groups from false mass reporting and illegitimate account suspensions 
  • Expand end-to-end encryption for users' data privacy 

Recommendations for civil society:

  • Closer coordination on how to advocate for digital rights to avoid fragmented, unimpactful calls and align strategies to create a stronger stand against the government’s actions
  • Work together with platforms to formulate a multi-pronged strategy envisaging both private sector and civil society perspectives
  • Work towards increasing public literacy on digital rights 
  • Bring international attention to these critical issues

Recommendations for states:

  • Diplomatic efforts must extend to digital rights e.g. make them a proviso in trade agreements 
  • Financial and logistic support for NGOs

 

IGF 2023 Open Forum #59 Whose Internet? Towards a Feminist Digital Future for Africa

Updated: Mon, 30/10/2023 - 08:42
Data Governance & Trust
Key Takeaways:

It might have become progressively easier for women to participate meaningfully in policymaking related to digitisation (including Internet governance) over the past twenty years, but there are still barriers to overcome and to address in order to make women’s voices heard and needs met in a comprehensive and not tokenistic manner.

,

There is a need for diversifying and deepening conversations, perspectives, terminology, and research about feminist priorities in the Internet space in order to move beyond a common focus on challenges pertaining to online gender-based violence and related issues, to broader dimensions that shape socio-digital inequalities that continue to impact women’s experiences in Africa.

Calls to Action

Invest in developing more meaningful and diverse research and advocacy agendas pertaining to women and feminist issues that extend beyond online gender-based violence.

,

Stakeholders are encouraged to continue investing in capacity-building for African women. Women who are currently actively engaging in digital policymaking and Internet governance platforms should continue to actively open up spaces for new and young women leaders who can actively participate in these conversations and discussions in the future.

Session Report

Session Summary Report

 

As part of the 2023 UN Internet Governance Forum, held in Kyoto, Japan from October 9 to 12, the African Union Development Agency (AUDA-NEPAD) organized an open forum, Whose Internet? Towards a Feminist Digital Future for Africa, on October 12. The session invited experts from the digital and policy sectors to a panel discussion on the opportunities and challenges faced by women working in Africa’s digital economy and their role in shaping Africa’s digital transformation.

 

The session was hosted and moderated by Dr Towela Nyirenda-Jere of AUDA-NEPAD’s Economic Integration Division, supported by Alice Munyua, the Senior Director for Africa Mradi at Mozilla Corporation on-site.

 

Alice Munyua from Mozilla Corporation and Liz Orembo from Research ICT Africa (RIA) opened the discussion by sharing powerful personal testimonies illustrating their experiences as women and female leaders in Africa’s digital sphere. Their accounts highlighted the (mis)perception of female expertise and the importance of female role models in digital spaces. Building on their accounts, Bonnita Nyamwire from Pollicy and Dr. Nnenna Ifeanyi-Ajufo, Professor of Technology Law, shared and discussed research findings on threats of online gender-based violence, barriers faced by women in Africa’s digital economy, and lessons on good practices and policy implications for ensuring safe digital spaces and socio-digital equality for women on the continent. Dr. Tobias Thiel from GIZ concluded the discussion by emphasizing Germany’s commitment to feminist development policies and its continuous efforts to eliminate discriminatory structures for women, girls, and marginalized groups within the African digitalization and data sphere. All panelists highlighted the barriers women continue to face when working in digital sectors and emphasized the need to leverage women’s opportunities and participation to ensure an inclusive African digital transformation.

 

Participants off- and online actively engaged in the discussion and emphasized panelists’ statements by sharing their own experiences as leading female experts in the field. The interactive discussion underlined the importance of creating safe spaces and called for policymakers to ensure the inclusion of female voices in shaping policies that ensure a fair and just digital transformation in Africa. 

 

Panelists and the audience called for investing in the development of more meaningful and diverse research and advocacy agendas pertaining to women and feminism that extend beyond online gender-based violence. Panelists and audience also encouraged stakeholders to continue investing in capacity-building for African women. Women who are currently engaged in digital policymaking and Internet governance platforms should continue to open up spaces for new and young women leaders to participate in these conversations and discussions in the future. Finally, the panel discussion called on every person to consider their own unique commitment towards advocating for socio-digital equality for women on the continent and beyond, and to take tangible steps towards realizing these goals.

 

In conclusion, the session identified several key takeaways from the panel discussion and subsequent round of contributions from the audience: While it might have become progressively easier for women to participate meaningfully in policymaking related to digitalization (including Internet governance) over the past twenty years, there are still many barriers to overcome and to address in order to make women’s voices heard and needs met in a comprehensive and not tokenistic manner. In addition, the discussion identified a need for diversifying and deepening conversations, perspectives, terminology, and research about feminist priorities in the Internet space in order to move beyond a common focus on challenges pertaining to online gender-based violence and related issues, to broader dimensions that shape socio-digital inequalities that continue to impact women’s experiences in Africa.

 

 

 

IGF 2023 Lightning Talk #97 Combating information pollution with digital public goods

Updated: Sun, 29/10/2023 - 23:23
Global Digital Governance & Cooperation
Key Takeaways:

The session highlighted lesser-known tools for countering misinformation and disinformation.

,

There was interest in what digital public goods are and how they can be implemented.

Calls to Action

Provide more hands-on opportunities to interact with the tools; a live demo could perhaps be effective.

,

A broader understanding of what digital public goods are is needed to support the prevention of disinformation and misinformation.

Session Report

Combating information pollution with digital public goods report

This lightning talk opened with an overview of the Digital Public Goods Alliance (DPGA), a multi-stakeholder initiative to accelerate attainment of the Sustainable Development Goals by facilitating the discovery, development, use of and investment in digital public goods. The DPGA “defines digital public goods as open source software, open data, open AI models, open standards and open content that adhere to privacy and other applicable laws and best practices, do no harm, and help attain the Sustainable Development Goals (SDGs).” An example of a DPG is District Health Information System 2 (DHIS2), the world's largest health management information system platform. This was followed by an overview of GitHub, a software development platform for building, scaling, and delivering secure software, used by more than 100 million software developers and more than 4 million organizations, from governments to international development organizations. Open source software like digital public goods is built on GitHub.

 

This session focused on how, while digital technologies are essential parts of our lives and provide solutions to some of the world’s greatest challenges, we must urgently recognize and help solve their downsides. This is particularly true regarding online information pollution, which has grown to be a cause of distrust and obfuscation. During this session the speakers provided an overview of how policies are needed to combat deepfakes, analyze online news media, verify crowdsourced data, monitor technology companies’ legal terms, improve access to government policies and, lastly, gain insights into the influence of digital technologies on societal conflict.

Mis- and disinformation are typically addressed through reactive measures against specific attacks or proactive prevention efforts. While these approaches are necessary and valuable, they are inherently endless and fail to address the root of the problem. Exploiting vulnerabilities for political gains will always attract malign actors, outnumbering those interested in prevention.

The issue of disinformation arises from vulnerabilities in the tools that mediate the information environment. These vulnerabilities persist because fixing them conflicts with the economic incentives of large platforms. Therefore, it is crucial to increase the costs associated with leaving these vulnerabilities open and provide incentives for their resolution. Alternatively, obligations should be imposed on actors to compel them to address these vulnerabilities.

The session provided two examples. The first, Open Terms Archive, publicly records every version of the terms of digital services to enable democratic oversight. It addresses a critical gap in the ability of activists, journalists, researchers, lawmakers and regulators to analyse and influence the rules of online services. Open Terms Archive enables safety by equipping actors who are already engaged in addressing these vulnerabilities. It amplifies their capabilities and facilitates connections for mutual reinforcement, ultimately enabling more effective action.

The second example, Querido Diario, developed by Open Knowledge Brazil, addresses the challenge of accessing and analysing official decision-making acts throughout Brazil’s cities. With no centralised platform available, the only reliable source of information is the closed, unstructured PDF files of the official gazettes where these acts are published. To tackle this information gap, Querido Diario’s robots help collect, process, and openly share these acts. Launched over a year ago, it has grown into a comprehensive repository with more than 180,000 files, continuously updated with daily collections. Querido Diario helps combat information pollution by providing a transparent and reliable source of data that can be used to fact-check and counter false narratives, enabling informed analysis and promoting accountability. The primary users are researchers, journalists, scientists, and public policy makers, and it benefits various sectors including environmental researchers and journalists, education NGOs, and scientists working with public data. Today, Querido Diario’s coverage reaches 67 cities, where 47 million people live. The next steps involve scaling up to include all 26 Brazilian states and at least 250 cities. The project aspires to incorporate Natural Language Processing models and to integrate its data with other public datasets, helping users contextualise information even more.
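The collect-process-share pipeline described above can be sketched roughly as follows. This is an illustrative outline only, not Querido Diario's actual code: the gazette heading convention, function name, and record fields are all assumptions for the example.

```python
import re

def extract_acts(gazette_text: str) -> list[dict]:
    """Split a plain-text official gazette into individual, structured acts.

    Assumes each act starts with a heading like "DECREE No. 123" or
    "ORDINANCE No. 45" -- a made-up convention for illustration only.
    """
    # Match headings such as "DECREE No. 123" at the start of a line.
    heading = re.compile(r"^(DECREE|ORDINANCE) No\. (\d+)", re.MULTILINE)
    matches = list(heading.finditer(gazette_text))
    acts = []
    for i, m in enumerate(matches):
        # Each act's body runs until the next heading (or end of text).
        end = matches[i + 1].start() if i + 1 < len(matches) else len(gazette_text)
        acts.append({
            "type": m.group(1),
            "number": m.group(2),
            "body": gazette_text[m.end():end].strip(),
        })
    return acts

sample = """DECREE No. 123
Creates the municipal open data portal.
ORDINANCE No. 45
Appoints the data protection officer."""

for act in extract_acts(sample):
    print(act["type"], act["number"])
```

Turning unstructured gazette text into records like these is what makes the data searchable and linkable to other public datasets, as the project describes.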

Finally, we closed with a discussion of a gradient approach to AI openness. The DPGA has developed an exploratory framework to assess use cases of AI where full openness is not possible or not desirable. The audience was interested in the use of AI in preventing misinformation and disinformation, which we aim to explore in future sessions.
 

IGF 2023 Day 0 Event #182 Digital Public Goods and the Challenges with Discoverability

Updated: Sun, 29/10/2023 - 22:56
Digital Divides & Inclusion
Key Takeaways:

Take away 1: Attendees asked thoughtful questions on how to ensure digital public goods will not be misused by bad actors. This challenge would make a great next session, exploring ways to encourage proper use of open source tools.

,

Take away 2: There was extensive conversation on capacity building, covering not just hard technical skills but also the soft policies that impact the implementation of digital public goods within a region.

Calls to Action

There is extensive interest in exploring how digital public goods are used and how to prevent actors from using the tools to create harm.

,

Explore a simplified implementation process and ways for software developers to contribute.

Session Report

Digital Public Goods and the Challenges with Discoverability report

Summary of session

This session focused on the challenges of discoverability of digital public goods (DPGs) for governments and civil society to understand and implement. The talk opened with an overview of the Digital Public Goods Alliance (DPGA), a multi-stakeholder initiative to accelerate attainment of the Sustainable Development Goals by facilitating the discovery, development, use of and investment in digital public goods. The DPGA “defines digital public goods as open source software, open data, open AI models, open standards and open content that adhere to privacy and other applicable laws and best practices, do no harm, and help attain the Sustainable Development Goals (SDGs).” An example of a DPG is District Health Information System 2 (DHIS2), the world's largest health management information system platform. This was followed by an overview of GitHub, a software development platform for building, scaling, and delivering secure software, used by more than 100 million software developers and more than 4 million organizations, from governments to international development organizations. Open source software like digital public goods is built on GitHub.

One key element of this session was to provide more background on what open source means in the social sector. Open source refers to software whose source code is freely available to the public, allowing anyone to view, use, modify, and distribute it. This means that the software can be improved and customized by anyone who has the necessary skills, and that it can be used for a variety of purposes without any restrictions. Open source software is often developed collaboratively by a community, and is typically distributed under a license that ensures that it remains open and free to use. Open source in the social sector is defined as software built with relevance to the Sustainable Development Goals that does no harm by design and is driven by a desire to increase transparency, accountability, and participation, and to empower individuals and organizations to work together to address social and environmental challenges.

This led us to discuss policies and tools that can help improve discoverability: public and private sector partnerships; collaborative platforms; metadata standards; long-term sustainability plans; feedback and improvement loops; and interoperability standards.

Finally, the session concluded with five simple rules for improving discovery:

  • Rule 1: Decide what level of access you can provide for partners
  • Rule 2: Deposit your DPGs in multiple trusted repositories for access, preservation, and reuse. 
  • Rule 3: Create thoughtful and rich metadata - consider the FAIR Data Principles
  • Rule 4: Localize the tools for cross-domain integration 
  • Rule 5: Ensure accessibility and inclusion for ease of access
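As a rough illustration of Rule 3, a rich metadata record for a DPG might look like the following. The field names here are hypothetical, loosely inspired by the FAIR principles rather than drawn from any official DPGA schema, and the check is a minimal sketch of how a repository could validate discoverability.

```python
# A hypothetical FAIR-inspired metadata record for a digital public good.
dpg_metadata = {
    "identifier": "dhis2",                     # Findable: a persistent, unique ID
    "name": "District Health Information System 2",
    "description": "Health management information system platform.",
    "license": "BSD-3-Clause",                 # Reusable: a clear open license
    "repository": "https://github.com/dhis2",  # Accessible: where to obtain it
    "sdgs": [3],                               # Relevance: SDG 3, health and well-being
    "interoperability": ["open standards"],    # Interoperable: standards supported
}

# Minimal fields a catalogue might require before listing a DPG (assumed set).
REQUIRED_FIELDS = {"identifier", "name", "license", "repository"}

def is_discoverable(record: dict) -> bool:
    """Check that the minimal fields needed for discovery are present and non-empty."""
    return all(record.get(field) for field in REQUIRED_FIELDS)

print(is_discoverable(dpg_metadata))  # → True
```

Depositing such records alongside the software in multiple trusted repositories (Rule 2) is what lets governments and civil society actually find and reuse these tools.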

In conclusion, this was a great session that encouraged roundtable discussions; attendees raised questions on ensuring the security of open source, on preventing bad actors from misusing open source digital public goods, and on the challenges of capacity building. As a result of this session, GitHub has launched a microsite to encourage software developers to contribute to DPGs: https://forgoodfirstissue.dev/.

 

IGF 2023 WS #494 Strengthening Worker Autonomy in the Modern Workplace

Updated: Sun, 29/10/2023 - 00:11
Global Digital Governance & Cooperation
Key Takeaways:
  • Exploitation and Inequality: Emerging technologies like AI intensify labor exploitation and escalate global inequality. The business models of companies using these tools can compromise social protection rights, as they often fail to offer decent working conditions. Vulnerable groups, including refugees, are increasingly exploited to refine AI datasets.
,
  • Policy and Regulation Concerns: Urgent policy reform is needed to ensure adequate transparency between employers and workers regarding technology use in workplaces. Strong workplace privacy regulations are essential to prevent unwarranted data collection, protect personal information, and to guard against the deployment of unsound analytical tools.
Calls to Action
  • Establish and Enforce Robust Regulatory Frameworks for Worker Protection and Privacy: Develop and enforce detailed, internationally-harmonized workplace data protection and privacy regulation to protect workers, including low-paid workers, vulnerable workers, and hidden labor in the gig economy.
,
  • Foster Industry Accountability Initiatives: Establish frameworks and bodies that scrutinize and shine a light on corporate actions, ensuring that employers across all sectors adhere to high ethical, socio-economic, and environmental standards.
Session Report

The speakers presented insights into the gig economy, the future of work, the impact of Artificial Intelligence on labor rights, and corporate accountability in the context of achieving Sustainable Development Goal 8 (Decent Work and Economic Growth).

Gig Economy:

  • Globally, platform-mediated gig workers face challenges including low pay, long hours, lack of job security, and the absence of social protections. Case studies were presented from India and Paraguay. 
  • Gig workers face exacerbated problems due to the lack of data protection laws and regulations which apply in the workplace, and a lack of meaningful anti-discrimination regulations safeguarding independent contractors and freelance workers.

Labor Rights and Corporate Accountability:

  • While there are supportive measures for labor rights in some jurisdictions, implementation issues and challenges persist. The Covid-19 pandemic revealed the inadequacy of support for gig workers, highlighting the need for a better safety net.
  • Data protection laws and regulations are crucial to preventing the potential misuse of data collected in the workplace. At the same time, there is a need for worker autonomy in the digital age, especially in surveillance-heavy environments.
  • The concentration of power in the data brokerage industry, market dynamics, and acquisitions raise concerns about transparency, competition, and data privacy.
  • There were calls for greater accountability in venture capital and early-stage interventions in private markets. There is a need for more transparency in companies' developmental stages and more consultation with impacted workers.

Venture Capital and Economic Growth:

  • The venture capital ecosystem remains insular, favoring established networks. Only 7% of female founders globally receive backing from VC firms, pointing to a significant gender disparity in entrepreneurial support, and many problematic workplace surveillance technologies are being developed by men.
  • Platform cooperativism is a potential solution. Governments should promote the creation of fairer work platforms by the workers themselves.

Global Initiatives:

  • UN instruments like the Global Digital Compact, and the WSIS+20 Review, are positioned as tools that could aid in achieving the objectives of SDG 8.
IGF 2023 DC-SIG Involving Schools of Internet Governance in achieving SDGs

Updated: Sat, 28/10/2023 - 17:52
Key Takeaways:

- Issues involving SDGs are considered in many schools. This meeting heard reports on SDG 5 on gender, SDG 7 on access to energy, and SDG 16 on peace and justice. In follow-up discussions, targets 8.6 (economic aspects) and 9.5 (access) were also discussed.

,

- SIGs are becoming reference resources on IG in many countries on topics such as cybersecurity and regulatory frameworks. They can help bring clarity to the understanding of IG among citizens and government officials.

Calls to Action

- While SIGs discuss topics concerning SDGs, they do not always do so explicitly. While each of the schools decides on its own curricula and modalities, doing so explicitly could be considered in future courses.

,

- While SIGs can have a well-established curriculum, they can also adapt the content to special target groups to produce flexible and adaptable content. SIGs can share their resources on the DC-SIG wiki and website provided by the Dynamic Coalition to help others and to promote their own efforts and achievements.

Session Report

 

Session Presentation

Schools on Internet Governance (SIGs) are an important initiative that help with creating and strengthening capacity in Internet Governance. Regional SIGs have been operating in all the regions of the world, while national SIGs exist in many, but not all, countries. The DC-SIG provides a common platform where SIGs can discuss matters of their interest, share information, share innovations and discuss adaptive mechanisms as they evolve. While the global pandemic did adversely impact many SIGs, most are now back in a fully functional manner.

This session took stock of the current status of SIGs, supported community members who want to establish SIGs in countries that do not have them, and examined how SIGs can improve themselves by adopting new programmes and courses.

As part of each yearly meeting, the DC-SIG takes on a topic of specific interest for discussion and further development of plans. This year, the focus was how SIGs can contribute to developing curricula in support of the SDGs.

1- A slideshow of existing SIGs was shown, along with a presentation of the recently formed Japan SIG. New schools were given a chance to describe themselves.

2- Schools on Internet Governance (SIGs) and their impact on achieving the Sustainable Development Goals (SDGs 5, 7 and 16)

SDG 5 on gender equality. 

  • Ms Sandra Hoferichter (EuroSSIG)

Schools on Internet Governance (SIGs) contribute to this SDG because they are inclusive and cover a wide variety of topics. SIGs are a good effort to close the gender gap in education and to help promote women into leadership positions. For many years, EuroSSIG’s application numbers have shown that a growing number of women are interested in these topics.

  • Anriette Esterhuysen: AfriSIG addresses SDG 5 through developing women as leaders in IG and by including gender specific topics in the programme. Examples would be sessions on online gender-based violence and on the gender digital divide and how to respond.
     
  • Ashrafur Rahman Piaus (bdSIG)
    Bangladesh SIG works with rural communities on SDGs 5 and 9 by including women in its school and helping them advance, also reaching transgender people and many other marginalized communities.

SDG 7 on access to energy 

  • Ms Olga Cavalli (South SIG and Argentina SIG)

Access to energy is closely linked with climate change, so this SIG held several panels discussing the impact of energy consumption. On the other aspect of energy, it is important to note that there is a gap between areas that have access to energy and those that do not. In the SIG, they discussed this issue with different experts and panelists.

Other SDGs

  • Mr Alexander Isavnin (Russia SIG) spoke on the SDG on peace and justice. SIGs can help build new standards, reinforce the multistakeholder process as practised in ICANN, and promote inclusion and effectiveness.
  • SDG 8.6: Pakistan SIG conducts a session on digital entrepreneurship, inspiring youth to capitalize on the economic opportunities of the internet. For SDG 9.5 (c), access to the internet, it organizes sessions on access and inclusion where government and the private sector brief the audience about their plans for expansion of ICT services and the state of infrastructure in the city/area where the school is being held (pkSIG is held in a different city every year). 
  • Some SIGs discuss topics related to the SDGs, but not all the time. It would be worth exploring after this session how SIGs are promoted and present in Japan, for example.
  • Abdeldjalil Bachar Bong (Chad SIG) noted that every SIG, in its own specific way, already contributes to the SDG topics.

Roundtable Discussion on the evolution of SIGs

  • SIGs are becoming references on IG in many countries on different topics, such as cybersecurity and regulation, and they help bring clarity to the understanding of IG.
  • The SIGs can have a root in a solid curriculum and then adapt the content to a special target group to produce flexible and adaptable content. 
  • The SIGs  can share their resources on the SIGs wiki and website to help others and promote their own achievements. This may align with the concept of open education. 
  • There are different types of SIGs that cater to different groups of people.

 

IGF 2023 Lightning Talk #116 Canada’s Approach to Regulating Online Safety

Updated: Fri, 27/10/2023 - 20:34
Cybersecurity, Cybercrime & Online Safety
Key Takeaways:

In Canada, there is significant interest in regulating serious harms that result from online interaction, with many recognizing a need for a systems approach, as opposed to one focused only on individual-level content.

,

The directions of legislative design seen across governments are, in many cases, reflective of the legislative context (existing legislation, constitutional provisions) that constrains lawmaking, rather than of differences in fundamental opinions.

Calls to Action

For regulators creating regulations on online harm to be clear about legislative intent, and to focus on fulfilling that specific intent as opposed to other, potentially unachievable goals.

,

For conversations to be clearly centered on the experiences of harm as experienced by people living within that jurisdiction

Session Report

In the session on online harms in Canada, we started by discussing the Canadian definitions surrounding online harm, reminding participants that the talk was centered on the Canadian usage of terms, which may differ from how the same terms are used in other jurisdictions, and inviting participants to stop the presenter and ask questions if any points were unclear. We then defined online harms to mean financial, physical, psychological, and emotional harm that results from interactions that take place through the internet, whether or not they respect local, regional, or national borders. We then listed a number of examples of online harm, making clear that some instances (such as child sexual exploitation material) are illegal under the existing legal framework, while others (such as misinformation) are harmful but legal.

We then moved to a discussion of the results arising from a survey of Canadians’ experience of online harm, demonstrating that a significant number of Canadians are exposed to harmful content frequently. In particular, we noted that while many Canadians saw individuals as being largely responsible for generating harmful content, they did not see individuals as being primarily responsible for reducing the amount of harmful content online, instead seeing a larger role for online platforms and the government. This finding was discussed in detail, in particular as it informs the public policy conversation on the topic.

We then moved to a discussion of the legislative process currently under way in Canada to tackle online harms, situating the potential legislation within a slew of legislative activity over the past three years concerning internet governance and the digital economy broadly, and stressing that efforts to tackle online harms in Canada cannot be understood in isolation. From that point, a deeper exploration of the regulatory tensions surrounding online harms legislation followed, focusing in particular on how it interacts with public sentiment in Canada, as well as on the law’s potential impacts on the preferred economic system and its relationship to other existing legislation (including constitutional law, in Canada’s case the Charter of Rights and Freedoms) in directing the potential shape the legislation might take. The formal presentation finished by situating the Canadian conversation in a global context, stressing that while there is no unified approach to tackling online harm, many deviations seen globally likely do not reflect irreconcilable fundamental differences in definitions of online harm, but are much more likely to reflect the legislative constraints different countries face and the regulatory actions (both from a legal and political perspective) they can take.

After the talk, a number of questions were asked by the participants. One surrounded how legislative action can incorporate the idea of “benign exposure” to less harmful content, as a training to inoculate a user against being exposed to more harmful content. The presenter discussed at length current thinking on that topic in areas of policy approaches to tackling mis and disinformation, including approaches to increase digital media literacy amongst different groups.

IGF 2023 Open Forum #52 RITEC: Prioritizing Child Well-Being in Digital Design

Updated: Fri, 27/10/2023 - 13:58
Human Rights & Freedoms
Key Takeaways:
In addition to the clear and urgent need to identify and address online risks and harms for children associated with the digital environment, sustained multisectoral efforts that prioritize child participation, including research, are required to adequately understand and leverage the positive value that digital experiences can deliver for children’s well-being in a digital age.
Calls to Action
1. To designers of digital play: consider the Responsible Innovation in Technology for Children (RITEC) project outputs, in particular the children’s well-being framework, in your decision-making processes. 2. To governments: consider how to create an enabling environment for businesses to prioritize children’s well-being in digital design.
Session Report

RITEC: Prioritizing Child Well-Being in Digital Design

Open Forum #52 - Session Summary

Speakers

  • Adam Ingle, The LEGO Group
  • Aditi Singh, Young Advocate, Dream Esports India and Esports Monk
  • Professor Amanda Third, Western Sydney University
  • Sabrina Vorbau, EUN
  • Shuli Gilutz, PhD, UNICEF 

Purpose: The session introduced the concept of well-being for children in the digital age before going on to examine its importance when we consider the centrality of digital technologies in children’s lives and the rapidly growing concerns around online harms. 

Part 1: Setting the scene on child safety and well-being in a digital age

This part commenced with Aditi Singh, Young Advocate, describing her own experiences with online gaming and how, from a young age, games pushed her critical thinking and collaboration skills and enabled her to grow intellectually and socially. However, Aditi also described the harms associated with gaming, particularly those related to being a young woman online. This includes how she, and other children, often don’t understand the risks of sharing personal information and the prevalence of gender-based harassment.

Aditi then discussed how forums like the UNICEF Game Changers Coalition have helped her and others reimagine the role of women in online gaming and drive the design of games to make them more age-appropriate spaces. Aditi called for governments and other bodies to incentivize private firms to build experiences with children at their core, and for platforms themselves to realize that their choices can unlock the benefits of games while minimizing the risks.

Sabrina Vorbau from European Schoolnet followed Aditi, discussing the EU’s revised Better Internet for Kids (BIK) strategy and how the revision process ensured the new BIK strategy onboarded diverse views, including those of children, which were instrumental in shaping it. Ultimately this ensured the strategy adopted a more modern approach to promoting the protection, empowerment and participation of children online. Sabrina highlighted how young voices also helped inform the Safer Internet Forum conference, shaping important matters like topics, speakers and themes. Sabrina reinforced the need to educate with young people, not simply to them or for them.

Shuli Gilutz then discussed how design philosophies within industry are critical to embedding digital well-being into online play. Shuli unpacked the concept of ‘well-being’, noting that it is about the subjective experiences of children and includes not just safety but also outcomes like empowerment and creativity. Shuli described how RITEC is working with designers to develop a guide for business, giving them the tools to create positive digital experiences that are safe and private but also advance well-being.

Part 2: the RITEC project

Adam Ingle provided an industry perspective on why designing for children’s experiences is critical, discussing how the LEGO Group is embedding the concept in its own online play products. Adam highlighted that the RITEC project is about developing an empirical basis for understanding what digital well-being looks like while also creating the tools to proliferate responsible design throughout industry. Adam discussed the LEGO Group’s internal processes that helped the company implement best practice, including incorporating the views of child rights experts in product development, adopting clear digital design principles built around well-being, and ensuring business metrics and KPIs also measure success against well-being. Adam concluded by noting that it’s not just about equipping businesses with design tools; cultural change is also needed to lift industry standards.

Amanda Third introduced the RITEC project itself, based on engagement with almost 400 children (predominantly from the global south) and driven by their own views on digital play. Crucially, the project revealed that digital play brings joy and satisfaction and that children experience many benefits, particularly through fostering social connection and promoting creativity. They are, however, conscious of the dangers and expect governments and firms to protect them.

Amanda noted how the perspectives of children informed the design of a well-being framework with eight components: competence; emotional regulation; empowerment; social connection; creativity; safety and security; diversity, equity and inclusion; and self-actualization. The project has also developed metrics to determine whether digital play experiences are meeting these eight components of well-being, so it is a practical, measurable framework and not just an abstract one. Amanda concluded by reinforcing the benefits of online play for children but also the criticality of involving children in research.

Shuli noted the next steps for the RITEC project, which include a guide for business that summarizes the research and makes the findings actionable. Project managers are building the guidance with feedback from designers to ensure the tools speak design language and can be adopted with relative ease.

Panelists were asked to each note a critical action for embedding responsible digital design. Sabrina highlighted the importance of youth participation and including young voices in policy design. Adam emphasized the need for policymakers to adopt a holistic approach to online regulation that balances harms and benefits and incentivizes firms to design for well-being. Shuli stated that industry needs to pivot towards more holistic design philosophies, including empowerment rather than just engagement. Amanda cautioned that we should also recognize the limits of design and how it is one part of a wider solution that includes cultural change and education.

QUESTIONS AND DISCUSSION:

How do we reach a truly representative group of young people? Amanda noted that it is important to reach out to partner organizations with expertise in engaging vulnerable and diverse perspectives, but also that there is no perfect research method for participation, and we all need to move forward consciously.

How do we design for the evolving capacities of children? It was noted that regulatory frameworks require firms to consider the different capacities of children and Adam discussed how clever technical design can ensure that, for example, social settings are more limited for younger ages but expand for older ages who can engage with strangers in a more mature way (and with less risk).

What is the role of parents and educators and how does the framework include them? Shuli noted that the main recommendations for parents are (1) play with your kids: once you play with your kids you understand the benefits and risks, and that helps the discussion happen; and (2) talk to children about what you, as a parent, are worried about. Sabrina noted that conversations between parents and children about online safety are critical.

IGF 2023 Town Hall #39 Elections and the Internet: free, fair and open?

Updated: Fri, 27/10/2023 - 13:12
Human Rights & Freedoms
Key Takeaways:

Importance of a multistakeholder approach, but recognition of the lack of government and private sector engagement, in the Africa region in particular, which leads to isolation and an inability to effectively moderate content. This can lead to the common use of Internet shutdowns as a means of addressing content issues such as hate speech, which is not the solution.

Whilst some governments may lack the tools, knowledge, digital literacy and access to the wider multistakeholder community to address issues of concern through effective content moderation, shutting down the internet does not address the root causes and only creates more problems, including undermining rights and the prosperity of a society. Internet shutdowns are also widely used as a deliberate tool for controlling the free flow of information.

Calls to Action

Call on governments to cease use of the blunt tool of internet shutdowns which impedes the free flow of information during electoral periods, and threatens human rights and the democratic process as a whole.

Reinforce the importance of planning ahead through narrative and risk forecasting to pre-empt and mitigate shutdowns, with a view to developing knowledge and literacy around other means of addressing the issues governments state they are addressing by shutting down the internet (e.g. hate speech). Addressing one problem by creating another is not the answer, and the multistakeholder community must continue to challenge the narrative.

Session Report

This session was facilitated by the FOC Task Force on Internet Shutdowns (TFIS), co-Chaired by the U.K. and Freedom Online Coalition-Advisory Network members Access Now and the Global Network Initiative. The session examined causes, trends and impacts of Internet shutdowns and disruptions, and explored how the multistakeholder community can work together to anticipate, prepare for, and where possible prevent Internet shutdowns before they occur, with a focus on identifying practical steps that can be taken ahead of ‘high risk’ elections in 2024.

Kanbar Hossein-Bor, Deputy Director of Democratic Governance & Media Freedom at the U.K. Foreign, Commonwealth & Development Office, provided opening remarks, noting that Internet shutdowns pose a significant threat to the free flow of information and are a fundamental impediment to the ability to exercise human rights, underscoring the importance of a multistakeholder approach to addressing these challenges. Mr. Hossein-Bor highlighted the Freedom Online Coalition (FOC) Joint Statement on Internet Shutdowns and Elections, launched during the session, which calls on States to refrain from shutting down the Internet and digital communications platforms amid electoral periods, as aligned with States’ international human rights obligations and commitments.

Speakers underlined the critical role access to the Internet and digital media platforms plays in promoting free, transparent, and fair electoral processes. Panellists spoke on the negative reality of Internet shutdowns and their impact, noting their destructive consequences for economic prosperity and access to health care, as well as their role in obscuring human rights violations. Panellists highlighted how Internet disruptions and preventing access to platforms during election periods are often justified by governments as a means to ensure national security and to mitigate disinformation, even though shutdowns and disruptions have proven to further exacerbate security risks, especially among already vulnerable groups. Speakers also highlighted big tech companies’ lack of engagement and product oversight in local contexts (e.g. hate speech moderation in local languages). Additionally, when examining government use of Internet shutdowns, panellists flagged governments’ lack of knowledge and experience regarding alternative tools to address security concerns amid elections in contexts of violence. In these contexts, full and partial shutdowns were used as a form of resistance and expression of sovereignty by governments in response to companies and systems they felt powerless against and did not know how to engage with. In addition to underlining the need for a multistakeholder approach and calling on telecommunications and digital media companies to ensure people have access to a secure, open, free, and inclusive Internet throughout electoral processes, panellists also recognised the role of disinformation as a risk cited by governments to justify Internet shutdowns and disruptions during elections. To address this challenge, speakers noted the following recommendations:

● Narrative forecasting: Anticipating the types of narratives that may be deployed at different points in the electoral process, and preparing a response;

● Overcoming selection bias: Finding ways to bring fact-based information into the right spaces;

● Preemptive responses to disinformation: Drafting preemptive responses to disinformation in order to reduce response time and minimise the spread of disinformation;

● Collaboration between civil society and Big Tech: Encouraging collaboration between local civil society organisations and big tech companies to address online content moderation in local contexts.

During the Q&A session, audience members enquired about government and civil society strategies to address and prevent Internet shutdowns, emphasising additional considerations to take into account when seeking to promote fair and open elections.

The U.K. closed the session by reiterating the importance of 2024 as a key election year, and also highlighted the publication of the Oxford Statement on Universal Access to Information and Digital Connectivity, developed following the Global Conference for the International Day for Universal Access to Information 2023.

IGF 2023 Open Forum #163 Technology and Human Rights Due Diligence at the UN

Updated: Fri, 27/10/2023 - 12:50
Global Digital Governance & Cooperation
Key Takeaways:

- Access to effective remedy is crucial, noting the impact of technologies on marginalized and vulnerable populations. There is a need to build in elements of independent assessment for oversight and accountability reasons. Transparency on the process and practice and continued engagement with civil society are key. Effective enforcement is also a key element to the success of this guidance.

Calls to Action

- Emphasize the need to take seriously the questions raised in the discussion on transparency, independent assessments, and enforcement, for the HRDD Policy Working Group to take on as it implements the next stages of the policy guidance.

Session Report

The UN is developing a guidance note on human rights due diligence for its use of digital technology. This process has included consultations with internal and external partners, helping mainstream human rights due diligence and align approaches across the UN system. The guidance, which has undergone multiple drafts, aims to be inclusive and to address different impacts, especially regarding gender and intersectionality. It will be considered for implementation across the UN system following feedback and endorsement.

UNHCR is actively applying human rights due diligence in its use of digital technology, focusing on complex settings. It has a range of policies in place and is working on a formal framework to align with international human rights and ethical standards. UNHCR has been involved in developing the guidance through case studies and strategic partnerships, and the guidance has evolved to become more implementable. UNHCR plans to incorporate the guidance into its digital strategies.

The World Bank commends the principles-based approach but emphasizes the need to consider different levels of development and maturity among member states, stressing the importance of adapting the guidance to each country's specific context while maintaining universal principles.

Access Now highlights that access to effective remedy is crucial, noting the impact of technologies on marginalized and vulnerable populations. There is a need to build in elements of independent assessment for oversight and accountability reasons. Transparency on the process and practice and continued engagement with civil society are key. Effective enforcement is also a key element to the success of this guidance, as well as transparency in private-public partnerships.

The session concluded with OHCHR emphasizing the need to take seriously the questions raised in the discussion on transparency, independent assessments, and enforcement, for the HRDD Policy Working Group to take on board as it implements the next stages of the policy guidance.

IGF 2023 DC-Sustainability Data, Access & Transparency: A Trifecta for Sustainable News

Updated: Fri, 27/10/2023 - 01:28
Human Rights & Freedoms
Key Takeaways:

Data, access, and transparency are fundamental to the sustainability of news and internet governance. However, data access discrepancies around the world, especially in Global South regions, limit the capacity of research, analysis and reporting about the impact that digital platforms have on news and journalism sustainability, as well as on society as a whole.

The global reach of supranational policies might require regional and local parties to comply with rules originating elsewhere. The session acknowledged the interconnection of local issues with global ramifications and vice versa, and stressed the importance of ensuring representation and access to digital policy discussions at all levels for those communities and sectors that will be most affected by these initiatives.

Calls to Action

To Intergovernmental Organizations: Allocate resources and initiatives to enhance participation and access for underrepresented communities, ensuring their voices are heard in global internet policy discussions, including on data privacy, news sustainability, and generative AI, and that their perspectives are taken into account when drafting resolutions, policies, and guidelines.

To Private Sector: Ensure that the implementation of internal policies created in compliance with international or supranational bodies takes into account the diversity of local contexts. Engage with local stakeholders, media organizations, journalists, and their communities to address the local implications of global digital policy frameworks.

Session Report

Introduction

The DC-Sustainability coordinators, Daniel O’Maley, Waqas Naeem and Courtney Radsch opened the session by underscoring the significance of balancing technology innovation governance with the sustainability of journalism and news media. The key highlight for the year was the dynamic coalition's focus on data transparency and access as vital elements for media sustainability. The coalition's annual report was launched during the session, a collaborative endeavor that offers a snapshot of the critical issues facing the news media industry. The report spotlighted topics like the power imbalances between media and tech giants, the dual nature of government regulations impacting media, and the challenges and opportunities presented by technological innovations, such as generative AI.

In the first section of the session, authors of the report presented their chapters: Prue Clarke (New Narratives - Australia), Mike Harris (Exonym - United Kingdom), Juliana Harsianti (Independent Journalist and Researcher - Indonesia) and Juliet Nanfuka (CIPESA - Uganda). Following the presentations of each chapter, members of the DC-Sustainability took the floor to present their work: Michael Markovitz (GIBS Media Leadership Think Tank - South Africa), Ana Cristina Ruelas (UNESCO - Mexico), Julius Endert (DW Akademie - Germany), Michael Bak (Forum on Information and Democracy - France), Sabhanaz Rashid Diya (Tech Global Institute - Bangladesh) and Ramiro Alvarez (CELE - Argentina). The session concluded with an open discussion with the audience.

Global influence of EU/US policies

A key topic was the overarching effect of policies and tech companies from powerhouses like the EU and the US on the global digital space. Despite being localized, their ripple effect transcends borders, impacting organizations working in so-called “Global South” countries. These organisations often find themselves grappling with the daunting task of compliance, struggling to decipher a logic they didn't create and can't control. Notably, these policies (both from companies and governments) play a pivotal role in shaping how journalists and media outlets operate, offering them limited avenues to challenge the tech giants. Courtney Radsch elaborated on these techno-legal systems, emphasizing the major influence of US and EU-based tech platforms on global media. These platforms determine how content rules and policies, such as the DMCA and GDPR, are implemented. Tying into the conversation on how centralized internet governance has impacted media visibility and sustainability, Mike Harris spoke about the importance of decentralized rulebook systems to empower news media, especially in the face of challenges from large online platforms. Juliana Harsianti shed light on the evolution of digital technology in Indonesia, emphasizing the implications of regulations intended for e-commerce now being used to restrict journalistic freedom. 

Digital Equity: Paving the Way for Sustainable Journalism

Data stands as the backbone of informed decision-making in today's digital realm. Gathering the right data is the first hurdle. With tech platforms influencing the visibility and viability of content, there's an undeniable need for a coordinated approach to collect and utilize data. Such data can aid in understanding audience behaviors, advertising strategies, and the effectiveness of content distribution methods. Ensuring a fair compensation model, bolstered by clear data-driven strategies, can pave the way for the sustainability of quality journalism. In that regard, via a written statement, Michael Markovitz highlighted the conference held in July, “Big Tech and Journalism - Building a Sustainable Future for the Global South”, which culminated in the adoption of the Principles for Fair Compensation, intended as a framework for the design of policy mechanisms seeking to address media sustainability through competition or regulatory approaches.

Prue Clarke spotlighted the disparity faced by countries like Liberia in the digital age. The challenges faced by media in such countries, from a lack of digital monetization knowledge to reliance on government support, are evident. Juliet Nanfuka offered a parallel from Uganda, emphasizing the hesitancy in the media's approach to AI, despite the challenges they face. Both Clarke and Nanfuka highlighted the struggles and gaps in media adaptation and digital training in lower-income countries.

Daniel O’Maley emphasized the transformative power of data sharing, stressing the importance of understanding which data is essential for different sectors. He talked about the implications of data transparency policies, especially considering their global impact.

While Nanfuka highlighted the challenges of integrating new technology into media spaces that are already grappling with other significant issues, Julius Endert dived into the transformative power of AI in media, emphasizing the importance of AI literacy. Both Endert and Nanfuka conveyed the urgency for media sectors, especially in developing countries, to understand and adapt to AI's growing influence.

Regional Focus vs. Global Perspective:

During the Members’ Spotlight, Sabhanaz Rashid Diya offered insight into the mission of the Tech Global Institute to bridge the equity gap between the global South and dominant tech platforms. Ramiro Alvarez provided a deep dive into the media landscape of Latin America, emphasizing the influence of state-driven media and the need for more open dialogue. This regional focus complements the broader global themes discussed, reinforcing the idea that global digital governance challenges often manifest in unique regional ways. Despite the fact that the media landscape varies by region and country, there are common threads of challenge and opportunity related to digital governance, sustainability, and the integration of new technologies.

Conclusion and next steps

Overall, the session emphasized the value of global collaboration grounded in local insights. It's not just about dissecting EU or US policies, but also diving deep into what's happening in places like Uganda and Liberia. The local challenges faced in these regions have global implications, reinforcing the need for an inclusive approach in policy discussions.

While the EU and US might often take center stage due to their significant influence, the collective effort should focus on ensuring that voices from all corners of the world are heard: Global strategies must be informed by local knowledge and experiences. 

In the coming months, DC-Sustainability members will meet again to shape the priorities for the year ahead, especially when it comes to envisioning AI governance and its impact in the media. The goal is to ensure that as the world of journalism evolves, it remains rooted in authenticity, inclusivity, and the pursuit of truth.

IGF 2023 Networking Session #78 Governing Tech for Peace: a Multistakeholder Approach

Updated: Fri, 27/10/2023 - 01:12
Global Digital Governance & Cooperation
Key Takeaways:

While some perceive technology as a threat to peace (cyber vulnerabilities, privacy and discrimination issues, disinformation and polarisation on digital platforms, trust in information and data undermined by AI), digital technology should also be seen as a peace-enhancing factor, if properly governed by avoiding "tech-solutionism" and adopting an inclusive, multistakeholder approach to implementing PeaceTech initiatives.

We need to move from "coercive peace" (tech for security and stability) to "persuasive peace" (tech and data to promote social cohesion). We need human rights due diligence for the procurement process of tech solutions: tech that violates human rights, dignity and freedom should not be called PeaceTech. To enhance social trust, we should regulate processes rather than content, so that the Internet can become truly transparent and accountable.

Calls to Action

To bring together different stakeholders (governments, tech-companies, NGOs, academia) to discuss the potentials and challenges of PeaceTech, define key areas of intervention, and implement collaborative projects to enhance peace and social cohesion via the safe and responsible use of frontier technologies.

Session Report

The Networking Session started with a round of introductions. The participants came from different sectors, but their common thread was using technology for peace and sustainable development. At the beginning of the discussion, the participants tackled the definition of peace as an important first step in determining the role of technology in its enhancement. Human rights were mentioned as a necessary but not sufficient condition for peace, along with other criteria such as the positive definition of peace, according to which peace implies attitudes, institutions and structures that create and sustain peaceful societies, rather than the mere absence of violence.

When it comes to the relationship between technology and peace, the participants identified both positive and negative impacts of tech on peace. As PeaceTech advocates using technology as a tool to achieve peace, PeaceTech should not be associated with any technology that violates human rights and dignity or endangers people’s freedom. In line with that, the participants commented on the need to move from coercive peace, which entails using tech centrally to obtain security and stability, to persuasive peace, in which technology and the collected data can be used to advance peace and social cohesion.

Building trust and creating a safer space without compromising on freedom of expression was identified as another crucial mission. Mindful of people’s tendency to behave responsibly when they are held accountable for their words and actions, the participants mentioned the need to raise transparency and accountability in the digital environment. An example that came up was the social scoring system in China, relevant both to the trust-building issue and to defining the areas that PeaceTech includes.
The participants agreed on the importance of bringing together stakeholders from various fields, such as governments, tech companies, NGOs and academia, as well as from different parts of the world and with different perspectives. Through this multistakeholder approach, the actors would discuss the potentials and challenges of PeaceTech and areas of possible intervention, and implement collaborative projects contributing to the safe and responsible use of technology to improve peace and social cohesion.

IGF 2023 Day 0 Event #189 Women IGF Summit

Updated: Fri, 27/10/2023 - 01:03
AI & Emerging Technologies
Calls to Action

Women IGF should study the costs of women’s exclusion from digital leadership and digital spaces, and the cost of women’s lack of internet access.

Women IGF should be recognized as an NRI that is inclusive of and representative of global issues.

Session Report

A call to action is to promote Women IGF globally, to identify and work with ambassadors or champions of internet governance to push for the national actions required to empower women, to give them the opportunity to participate as leaders in Internet governance and policy formulation, and to be recognized as an NRI at the global IGF level. Secondly, to support the inclusion of Feminist Principles in the Global Digital Compact.

IGF 2023 Open Forum #98 CGI.br’s Collection on Internet Governance: 5 years later

Updated: Fri, 27/10/2023 - 00:12
Global Digital Governance & Cooperation
Key Takeaways:

Libraries play an important role in providing access to knowledge. CGI.br has been working on implementing a library and many outreach initiatives that can inspire other organizations to make information on Internet governance more accessible.

Controlled vocabularies are essential resources for organizing and retrieving information and data on Internet Governance. Artificial intelligence and machine learning tools can be used to help automate the building of such taxonomies.

Calls to Action

The IGF should provide a space for experts and stakeholders to share insights, best practices, and challenges related to building and maintaining collections on Internet governance.

Stakeholders need to cooperate more on building collections on Internet Governance. One essential area of collaboration is the development of taxonomies and vocabularies specific to Internet Governance.

Session Report

The Open Forum "CGI.br’s Collection on Internet Governance: 5 years later" was held at IGF 2023 to continue the discussion that began in 2017 with the Open Forum "Memory and documentation in Internet Governance: The challenge of building collections". It drew an audience of 12 people and featured five audience interactions.

The moderator Vinicius W.O. Santos provided context by explaining that the earlier open forum was co-organized with the Internet Corporation for Assigned Names and Numbers (ICANN) and focused on documentation and preserving institutional information. Additionally, the Brazilian Internet Steering Committee (CGI.br) team shared its initial efforts to create a specialized library in Internet governance.

The Speaker Jean Carlos Ferreira reported on the main activities and progress made since the last Open Forum about the CGI.br collection. He highlighted actions taken within the Brazilian Internet Steering Committee (CGI.br) and The Brazilian Network Information Center (NIC.br) related to producing and sharing information on Internet governance in Brazil.

The presentation mentioned the wide range of materials produced by CGI.br and NIC.br, including books, guides, reports, CGI.br meeting minutes, resolutions, technical notes, and other promotional materials. 

Ferreira described the main pillars of CGI.br’s collection: 1) documentation of CGI.br activities; 2) publications; and 3) a specialized physical library. The project also includes the development of a digital repository that will include all materials from the Brazilian IGF.

Regarding the initiative's challenges, the presentation raised the need to build a multilingual Internet Governance vocabulary for standardized document indexing. Another highlighted challenge referred to implementing and maintaining robust, though complex, open-source tools that facilitate integration with other collections and collaboration with other organizations.

The moderator emphasized the importance of the session, as information organization and dissemination in the Internet Governance area are seldom discussed but vital.

Comments from the audience pointed out that CGI.br's collections play a fundamental role in strengthening the community and knowledge development on Internet Governance in Brazil. One participant drew attention to artificial intelligence and machine learning for document indexing and taxonomy design. Another participant mentioned the possibility of using language models for term extraction to build a taxonomy. A third participant inquired about lessons learned during the project and tips for institutions interested in implementing similar initiatives.

The speaker and the audience discussed the need to build an Internet Governance taxonomy for better information organization. Developing this taxonomy is a challenge faced by the Internet Governance community due to the diversity of topics and specializations within this field. Therefore, it is essential to bring together the librarian community, the Internet technical community, and other stakeholders to discuss and create an adequate vocabulary and taxonomy for the Internet Governance area.
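The term-extraction idea raised by the audience can be illustrated with a minimal sketch. The snippet below is a hypothetical example, not a tool discussed in the session: it ranks candidate vocabulary terms by how many documents they appear in, using only the Python standard library. The mini-corpus and stop-word list are invented for illustration; a real pipeline would use a language model or larger corpora, with librarian review before any term enters a controlled vocabulary.

```python
from collections import Counter
import re

# Invented mini-corpus standing in for Internet governance abstracts.
CORPUS = [
    "Internet governance requires multistakeholder cooperation on DNS policy.",
    "DNS security and DNS abuse are recurring Internet governance topics.",
    "Multistakeholder cooperation shapes policy for the open Internet.",
]

# Minimal illustrative stop-word list (a real one would be far larger).
STOPWORDS = {"a", "an", "and", "are", "for", "in", "is", "of", "on", "the", "to"}

def extract_terms(docs, top_n=5):
    """Rank candidate taxonomy terms by the number of documents they occur in."""
    doc_freq = Counter()
    for doc in docs:
        tokens = set(re.findall(r"[a-z]+", doc.lower())) - STOPWORDS
        doc_freq.update(tokens)  # each document counts a term at most once
    return [term for term, _ in doc_freq.most_common(top_n)]
```

On this toy corpus, `extract_terms(CORPUS)` surfaces "internet" first (it appears in all three documents), followed by terms shared by two documents, such as "governance" and "dns"; such candidates would then be reviewed by the librarian community before being admitted to a taxonomy.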

The session featured comments from Mr. Winston Roberts, representing the International Federation of Library Associations (IFLA), who mentioned that IFLA is involved in the Internet Governance process, participating as one of the multistakeholder communities. He pointed out the critical role that Internet Governance plays in delivering library services and disseminating information. He emphasized the importance of collaboration and cooperation between libraries and the Internet technical community. He discussed the update of IFLA's Internet Manifesto, encouraging participants to reach out to IFLA and its regional representations in Latin America and the Caribbean for more information.

In conclusion, the open forum fostered an important discussion on the need for collaboration and dialogue within the Internet Governance community to create a taxonomy that addresses Internet Governance topics. It underscored the importance of CGI.br's collections in strengthening knowledge development within the Internet Governance community.

IGF 2023 Town Hall #162 How prevent external interferences to EU Election 2024 - v.2

Updated: Thu, 26/10/2023 - 23:49
Cybersecurity, Cybercrime & Online Safety
Key Takeaways:

An efficient fight against disinformation at election time requires a framework for broad cooperation between the different stakeholders, continuous monitoring of the phenomenon, and rules for transparency in the different processes. A “big stick” against those who do not want to play by the rules is also very useful: in case of non-compliance, the European Commission can issue anything from warning letters to fines of up to 6% of global turnover.

Concerning the coming European elections, EDMO set up a specific task force with three areas of activity: the past, i.e. reviewing old electoral campaigns to identify the different strategies; the present, i.e. evaluating the main risks, country by country; and the future, i.e. preparing the network for the coming campaign.

Calls to Action

Under the guidance of the Commission, EDMO has created a task-force covering all EU countries and all EU languages with the involvement of a broad set of stakeholders to carry out a risk assessment, monitor and report on mis/disinformation trends, and increase cooperation between the stakeholders.

One of the new challenges is generative artificial intelligence, which can amplify intentional disinformation campaigns: a human-centric approach needs to clearly separate human from artificial output. Therefore, AI output will not enjoy copyright or free speech rights, and will need to be clearly identified (e.g. through watermarking).

Session Report

Esteve Sanz in Kyoto and Albin Birger from Brussels, the representatives of the European Commission, stressed that disinformation is false or misleading content that is spread with an intention to deceive or to secure economic or political gain, and which may cause public harm. It is not the Commission’s aim to create a ministry of Truth, but to make the online environment more transparent and its actors accountable, to empower citizens, and to foster open democratic debate. One of the new challenges is generative artificial intelligence, which can amplify disinformation campaigns: a human-centric approach needs to clearly separate human from artificial output. Therefore, AI output will not enjoy copyright or free speech rights, and should be clearly identifiable (identifying an effective way to do so, for instance through watermarking, remains a challenge).

They also presented the articulation of the different EU’s initiatives (regulatory and other) and institutional set up to fight against disinformation:

  • under DG CNECT, regulations have been developed and are now in place at EU level (the Digital Services Act and the Digital Markets Act); the Code of Practice on Disinformation was strengthened in 2022, committing industry to self-regulatory standards to combat disinformation. The Code of Practice is intended to be transformed into a Code of Conduct under the DSA (to constitute a risk mitigation tool for Very Large Online Platforms, while remaining voluntary); and the European Digital Media Observatory (EDMO) has been set up to support the creation of a cross-border and multidisciplinary community of independent fact-checkers and academic researchers.
  • under the European External Action Service, different strands of work aim to foster international cooperation, increase situational awareness, and coordinate responses to Foreign Information Manipulation & Interference (FIMI), including with partner countries, e.g. a Rapid Alert System between EU Member States’ administrations, the creation of the EUvsDisinfo database, and that of a FIMI “toolbox”.
  • DG COMM provides for internal Commission coordination and factual communication on EU policies, through monitoring and analysis of related areas, with an accent on debunking false narratives (e.g. climate change disinformation), and through the promotion of media literacy initiatives.

 

Specific situations also call for targeted and coordinated actions, e.g. the imposition of EU sanctions on state-owned outlets, suspending RT’s and Sputnik’s broadcasting in the EU.

In view of the coming 2024 elections, specific initiatives have been put in place to further cooperation between the different actors:

- within the framework of the Code of Practice, there is a Working Group on Elections, with a focus on the activities of the signatories and the facilitation of the exchange of information between them

- under the guidance of the Commission, EDMO has also created a task force covering all EU countries and all EU languages, with the involvement of a broad set of stakeholders, to carry out a risk assessment, monitor and report on mis/disinformation trends, and increase cooperation between the stakeholders.

Stanislav Matejka, representative of ERGA, explained that the European Regulators Group for Audiovisual Media Services functions as an expert body, which is also tasked with providing the Commission with essential evaluations of the local implementation of the Code of Conduct and of local compliance with the transparency obligations. It coordinates the work of the local authorities to monitor the effective implementation of European policies in these matters (e.g. access to data), and handles the repository of political adverts.

Paula Gori, Secretary General of EDMO, stressed the necessity of a multidisciplinary approach to the phenomenon of disinformation, which requires expertise in numerous fields, from emotion analysis to computing. In that sense, EDMO should be considered a platform offering tools to experts from the different fields, from fact-checkers to academic researchers, without forgetting the fundamental promotion of media literacy.

Giovanni Zagni, representative of a member of the network of fact-checkers and chair of the EDMO task force on elections, explained how their work has evolved from the sole analysis of content (which nevertheless remains an important part). For example, they set up a specific task force on Ukraine which led to 10 recommendations to policy makers; they produce a monthly brief on the tactics of disinformation.

Concerning the coming European elections, EDMO set up a specific task force which has three areas of activity:

- the past, i.e. reviewing old electoral campaigns to identify the different strategies

- the present, i.e. an evaluation of the main risks, country by country

- the future, i.e. how to better prepare the network for the coming campaign.

Caroline Greer, representative for TikTok, expressed the support of the company for fact-checking.

Concerning the coming elections, TikTok has a global election integrity program, with a template that is applied to local circumstances. This includes:

- specific election policies

- community guidelines

- a full prohibition of political advertising (at all times)

- a restriction of certain political activities such as funding campaigns

- local “election hubs” that inform citizens about, for example, where to vote, etc.

Eril Lambert, from Eurovisioni in Rome, expressed appreciation for the role attributed by the European Union to civil society in the mechanisms to fight disinformation, and raised several questions to the representatives of the EU and of the platforms. In response to different questions online and in the room, it was clarified that the voluntary Code of Conduct is only one tool to demonstrate compliance with European rules. The objective is to bring disinformation to light through transparency – the Commission often launches investigations, and the DSA has now added an auditing layer to the instruments at its disposal. Takedowns by platforms, with their motivation and any appeals, have to be sent to a Commission database.

In case of non-compliance with the rules, the Commission has several means available, such as warning letters and the imposition of (large) fines of up to 6% of global turnover.

It was also noted that it is important to improve collaboration between platforms, authorities, and institutions such as EDMO, e.g. to facilitate researchers’ access to platform data.

Transparency of recommender systems is also an issue. TikTok, for example, allows users to reset their recommendations to avoid remaining locked in a filter bubble, or to refuse a personalized feed.

The conclusion was that an efficient fight against disinformation requires a framework for broad cooperation between the different stakeholders, continuous monitoring of the phenomenon, and rules for transparency in the different processes.

A “big stick” against those who don’t want to play by the rules is also very useful.

IGF 2023 Networking Session #172 Networking for Information Integrity in Asia and Globally

Updated: Thu, 26/10/2023 - 23:43
Cybersecurity, Cybercrime & Online Safety
Key Takeaways:

The process of negotiating internet governance issues is opaque and confusing to ordinary people, particularly in less developed, global majority contexts. There needs to be a multistakeholder approach (public sector, private sector, media, academia, civil society, tech companies) to address internet governance specifically focusing on information integrity issues.

,

Civil society engagement with the private sector has become more difficult as tech companies disinvest in trust and safety teams. Certain platforms, such as TikTok, have become more responsive, for instance to physical threats of violence or violent images, while others, such as X, have been challenging to engage.

Calls to Action

All stakeholders should work with and pressure private sector technology companies to have clear and robust escalation paths that are not based on personal relationships or single employees committing to action.

,

Civil society should form regional networks so that similar closing contexts can share resources and strategies. Through networks, CSOs should look to share information to get a more holistic view of current data sets, engagement experiences, and historical data around closing societies and other contexts.

Session Report

Major themes:

This session brought together stakeholders from civil society across Asia and Globally to discuss the challenges facing CSOs when trying to build a resilient information space, especially in closed or closing societies. NDI discussed its Info/tegrity network and other means of connecting with groups across civil society to develop capacity to address information integrity issues and contribute to internet governance discussions. Experts from Pakistan and Taiwan shared the challenges associated with engaging social media platforms to gather data for critical research, support an open, democratic and free information environment during elections, and escalate cases of online harassment and abuse. The session then split into four break-out groups to share both existing challenges and potential solutions across the major themes on this issue.

Group 1: Challenges of working online in closed societies

  • This group discussed the feasibility of creating a global network of CSOs for groups or individuals working in closed societies. They agreed that while a network of support is an important component of successfully navigating a closed space as a CSO, regional-level networks make more sense than global networks. Closed societies face unique challenges within their larger classification and allowing convergence at the regional level would allow groups to take a narrower, deeper approach to networking than a broad, shallow global network would achieve. They cited current work in Asia around protecting journalists in closed societies as an existing model of their proposal.

Group 2: Social media data access for research

  • This group discussed current methods of monitoring social media platform information and what resources would make their work easier. They focused on ways CSOs can support each other’s work in addition to talking about recent API changes that have made research more difficult. 
  • They highlighted that to continue the important work of researching the information landscape using social media data, they recommend that CSOs build regional networks to share their experiences across similar contexts and share their current data sets and historical data sets to bolster the total amount of data and enrich everyone’s data sources. 

Group 3: Coordination with technology platforms around trust and safety concerns

  • This group discussed the varying roles specific social media platforms play across Asia and the world. They also emphasized that platforms’ gutting of trust and safety teams across the board has resulted in delayed or absent responses when online harm is reported, and an uptick in attacks on activists and human rights defenders.
  • Their main point was that while programs like Meta’s Trusted Partner Program are effective in providing an escalation path, they are not equitable and rely on personal relationships or on individual tech platform employees prioritizing trust and safety. A systemic fix is needed, especially with the 2024 elections around the corner. The recommendation from this group is that all stakeholders should work with and pressure private sector technology companies to have clear and robust escalation paths that are not based on personal relationships or single employees committing to action.

Group 4: Internet governance for information integrity

  • This group recommended several strategies to improve coordination at the global level around local, national, and/or regional Internet governance and policy best practices. These include adopting a multistakeholder (public sector, private sector, media, academia, civil society, tech companies) approach to Internet governance to make the process more accessible, prioritizing tools that enable access for people with disabilities and other marginalized groups, and developing regional and local strategies for Internet governance as well as a global perspective.
  • They also suggested that a human rights approach can be incorporated into technology platform policy by applying the multistakeholder framework to implement better interaction, information sharing and policies with the private sector. This would have impacts such as more robust privacy and data protection procedures, simplifying the language that platforms use to communicate their policies (including expanding available languages), and creating quantifiable measures for tracking online harms.

IGF 2023 Lightning Talk #37 Open Data Evaluation Model in Brazilian Governmental Portals

Updated: Thu, 26/10/2023 - 23:30
Data Governance & Trust
Key Takeaways:

Takeaway 2: Brazil has begun implementing such a tool

,

Takeaway 1: Tools for automated evaluation of open data portals and open data best practices can help to improve open data quality

Calls to Action

Call to action 2: The Civil Society that is involved with open data should become aware of the existence and workings of such evaluation tools

,

Call to action 1: Governments around the world should follow Brazil's example and implement evaluation models.

Session Report

Report on Lightning Talk #37: "Open Data Evaluation Model in Brazilian Governmental Portals" 

Introduction

The lightning talk "Open Data Evaluation Model in Brazilian Governmental Portals" was presented at the Internet Governance Forum, shedding light on the critical issue of data standardization and the efforts made by the Brazilian Network Information Center (NIC.br) to address this challenge. The talk emphasized the importance of open data quality, presented an automated evaluation model under development for the Brazilian Open Data Governmental portals, and issued two key takeaways and call-to-action messages.

Key Takeaway Messages

The presentation by the speaker highlighted two primary takeaway messages:

1. Tools for Automated Evaluation of Open Data Portals Enhance Data Quality

The first crucial takeaway from the talk was the significance of automated evaluation tools in enhancing the quality of open data. Open data portals often lack standardized information structures, which hampers efficient data access and utilization. The speaker stressed the need for standardized principles and best practices for publishing open data. Tools designed to evaluate open data portals and ensure adherence to these principles can play a vital role in improving the overall quality of open data.

2. Brazil's Implementation of Evaluation Tools

The second takeaway message revealed that Brazil has initiated the implementation of such tools for evaluating and improving open data quality. The Brazilian government has recognized the importance of standardization and best practices in data publication and is taking proactive steps to address these issues.

Call-to-Action Messages

The talk concluded with two call-to-action messages aimed at governments and civil society:

1. Governments Worldwide Should Emulate Brazil's Example

The first call to action implores governments across the globe to follow Brazil's lead and implement open data evaluation models. Given the benefits of standardization and best practices in data publication, the speaker urges governments to prioritize developing and deploying tools for automated evaluation in their own open data initiatives. This step would improve data governance and lead to more efficient data sharing and utilization.

2. Raise Awareness among Civil Society

The second call to action aims at civil society organizations and advocates involved in open data. It encourages these stakeholders to become aware of the existence and workings of open data evaluation tools. By increasing awareness and understanding of these tools, civil society can actively participate in the process, supporting the implementation of standardized data practices and advocating for open data quality in their respective regions.

Conclusion

The lightning talk on "Open Data Evaluation Model in Brazilian Governmental Portals" at the Internet Governance Forum highlighted the critical need for standardized data publication practices and the role of automated evaluation tools in achieving this goal. The Brazilian Network Information Center's proactive efforts in implementing such tools serve as an inspiring example for other nations. The call-to-action messages emphasize the importance of global adoption and civil society involvement in furthering the cause of open data quality and standardization.

In an age where data drives innovation and policy decisions, standardization and evaluation tools ensure that open data fulfills its potential as a valuable resource for governments, organizations, and individuals worldwide. The lessons from this talk must be acknowledged and acted upon, setting a higher standard for open data globally.

IGF 2023 Open Forum #58 Child online safety: Industry engagement and regulation

Updated: Thu, 26/10/2023 - 23:12
Cybersecurity, Cybercrime & Online Safety
Key Takeaways:

Online child sexual exploitation is a grave violation of human and child rights. Threats are continuously escalating and changing.

,

Self-regulatory measures are broadly perceived as inadequate. Significant regulatory and cultural changes are on the horizon, demanding greater responsibility and action from businesses.

Calls to Action

Governments and companies must remain vigilant and responsive to the ever-evolving threat landscape. Continued exchange of learning and experience in collaborative and co-regulatory models across different jurisdictions is necessary.

,

Companies should embed online child sexual abuse and exploitation into broader human rights due diligence, including impact assessments.

Session Report

IGF 2023 Open Forum #58: Child online safety – Industry engagement and regulation


Key Takeaways


1.    

Online child sexual exploitation is a grave violation of human and child rights. Threats are continuously escalating and changing.


2.    

Self-regulatory measures are broadly perceived as inadequate. Significant regulatory and cultural changes are on the horizon, demanding greater responsibility and action from businesses.

Call to Action


1.    

Governments and companies must remain vigilant and responsive to the ever-evolving threat landscape. Continued exchange of learning and experience in collaborative and co-regulatory models across different jurisdictions is necessary.


2.    

Companies should embed online child sexual abuse and exploitation into broader human rights due diligence, including impact assessments.

Context

This hybrid session – facilitated in person by Ms Afrooz Kaviani Johnson and online by Ms Josianne Galea Baron – explored different models of industry engagement and regulation to address online child sexual abuse and exploitation (CSEA).

Panel discussion

Ms Julie Inman Grant, eSafety Commissioner, Australia, discussed the suite of regulatory tools her office uses to combat online CSEA. Key among Australia’s tools are its complaints schemes, which facilitate the removal of harmful content to prevent re-traumatization and allow trend analysis to influence systemic change. Additionally, the Basic Online Safety Expectations, which detail the steps that social media and other online service providers must take to keep Australians safe, enable the Commissioner to demand transparency, complete with penalties. Australia’s tools also include mandatory codes for various sections of the online industry in relation to illegal and restricted content, including CSAM. The Commissioner emphasized that even the largest companies are not doing enough and stressed the need for global pressure on companies to enhance safety measures. ‘Safety by Design’ was highlighted as a fundamental systemic initiative to support industry to better protect and safeguard citizens online.

Mr Tatsuya Suzuki, Director, Child Safety Division of the Children and Families Agency, Japan, presented how the newly formed Children and Families Agency is working with the private sector to combat online CSEA. The national framework acknowledges the essential role of private sector voluntary actions to ensure children’s safety online. It respects the balance between eradicating harmful content and ensuring freedom of expression. The Agency’s strategies, detailed in the 2022 National Plan for the Prevention of Sex Crimes against Children, involve public-private collaborations. The Plan for Measures Concerning Child Sexual Exploitation 2022 outlines these government-led actions. In July 2023, a prevention package was presented to the Cabinet Office, emphasizing joint efforts with relevant ministries to address child exploitation. 

Mr Toshiaki Tateishi, Japan Internet Provider Association/ Internet Contents Safety Association, discussed Japan’s private sector initiatives against online CSEA. The Internet Content Safety Association (ICSA) compiles a list of websites known for child abuse material based on data from the National Police Agency and the Internet Hotline Centre. An independent committee reviews this data, and upon confirmation, the ICSA distributes a blocking list to ISPs and mobile network operators, preventing access to these sites. The Safer Internet Association (SIA) contributes by operating a hotline for reporting illegal content, conducting research, advising on policy, and leading educational initiatives. These associations coordinate with providers, both domestic and international, to reduce and remove illegal and harmful content.

Dr Albert Antwi-Boasiako, Director-General, Cyber Security Authority Republic of Ghana, emphasized Ghana’s approach to championing industry responsibility and innovation. Recognizing that self-regulation is insufficient, Ghana advocates for ‘collaborative regulation’ rather than traditional top-down mandates. This strategy acknowledges that companies often overlook the risks children face online. Ghana’s Cybersecurity Act mandates industry action to protect children, encompassing content blocking, removal, and filtering. This law requires further specification through a legislative instrument, which is currently being crafted in consultation with the private sector and civil society. The Act includes administrative and criminal penalties, crucial for enforcement in developing nations, and allows for fines to fund the regulatory institutions. Dr Antwi-Boasiako noted that success hinges on widespread awareness and understanding of the issues at stake.  

Mr Dunstan Allison-Hope, Vice President, Human Rights, BSR (Business for Social Responsibility) highlighted the critical role of human rights due diligence (HRDD), including impact assessments, in combating online CSEA. HRDD based on the UN Guiding Principles on Business and Human Rights (UNGPs) can form a key part of a company’s obligations to address online CSEA. The benefits of this approach include a comprehensive review of human rights impacts, special attention to vulnerable groups like children, and a structured framework for action, tailored to each company’s position in the technology stack. With regulations now echoing the UNGPs, voluntary measures are shifting to mandatory. He urged companies to embed children’s rights into their broader HRDD processes. While this significant regulatory change is especially prominent in Europe, he encouraged companies to take a global approach to achieve the desired child rights outcomes.

Interactive discussion

The discussion started with balancing children’s right to protection against their right to access information, especially age-appropriate and accurate sexual and reproductive health information. The conversation took cues from the UN Committee on the Rights of the Child, General comment No. 25 (2021). Although the internet was not built for children, they are significant users, leading to a call for both minimizing harm and amplifying benefits. Australia’s consultations on approaches to age assurance spotlighted this need, pushing companies to look beyond age-gating. A human rights-based approach was emphasized to navigate tensions between human rights. Strategies like DNS blocking alone were deemed inadequate; holistic approaches, like Australia’s ‘3Ps’ model of Prevention, Protection, and Proactive, systemic change, are crucial. One significant challenge lies in raising awareness and promoting help-seeking behaviours among children and young people.

Conclusion

Both regulators and companies, along with civil society, are currently navigating extremely challenging dilemmas. Whether through regulation, self-regulation, or ‘collaborative regulation’, there is a significant shift happening in the regulatory landscape. This shift presents an opportunity to firmly integrate the issue of online CSEA into these evolving processes.

Further resources

United Nations Children’s Fund (2022) ‘Legislating for the digital age: Global guide on improving legislative frameworks to protect children from online sexual exploitation and abuse’ UNICEF, New York.

 

IGF 2023 YCIG Advancing Youth Participation in IG: results from case study

Updated: Thu, 26/10/2023 - 22:38
Key Takeaways:

Value of Inclusivity: The discussion also emphasized the importance of not just engaging youth who are already part of the community, but also newcomers and the benefits of involving a wider and more diverse youth population in shaping these sessions and discussions.

,

Collaborative Efforts: Collaboration and partnerships seem to be key themes. The discussion highlights collaborative efforts across various groups, such as the Internet Society Youth Standing Group and the Youth Coalition on Internet Governance.

Calls to Action

The Role of Youth: While youth are at the decision table, there is a need to move beyond this and consider them as co-collaborators and co-creators in Internet governance discussions.

,

Growing Youth Engagement: The conversation underscored a growing trend where young people are becoming increasingly involved in these discussions. African governments, in particular, are beginning to engage more with the youth, but there is a call for deeper involvement beyond just Day 0 events.

Session Report

The session captured a discussion related to Internet Governance Forums (IGFs) and youth participation, specifically in different regions like Africa and Latin America. Following are some insights and takeaways:

1. Diverse Regional Perspectives: The session presented various regional perspectives, from Latin America to Africa, on the state of youth engagement in Internet Governance.

2. Growing Youth Engagement: The conversation underscored a growing trend where young people are becoming increasingly involved in these discussions. African governments, in particular, are beginning to engage more with the youth, but there is a call for deeper involvement beyond just Day 0 events.

3. Collaborative Efforts: Collaboration and partnerships seem to be key themes. The discussion highlights collaborative efforts across various groups, such as the Internet Society Youth Standing Group and the Youth Coalition on Internet Governance.

4. Case Studies: Various case studies from different regions, such as Latin America and Africa, were discussed to illustrate the state of youth engagement in these areas. For example, how Youth IGF operates differently across various regions due to cultural, logistical, and governmental factors.

5. Challenges and Solutions: Challenges such as the need for a common reporting tool and the disparity between youth discussions and main session topics were brought up. Solutions like creating a common platform for reporting were suggested.

6. Youth-Led Initiatives: There are emerging youth-led IGF initiatives, such as the Youth IGF in Ethiopia. These initiatives highlight the growing momentum and importance of youth voices in Internet Governance discussions.

7. The Role of Youth: While youth are at the decision table, there is a need to move beyond this and consider them as co-collaborators and co-creators in Internet governance discussions.

8. Value of Inclusivity: The discussion also emphasized the importance of not just engaging youth who are already part of the community, but also newcomers and the benefits of involving a wider and more diverse youth population in shaping these sessions and discussions.

In summary, the session provided a glimpse into the dynamic and evolving role of youth in Internet Governance across different regions. There's a clear call for deeper youth involvement, collaborative efforts, and the creation of systems that ensure their voices are effectively incorporated into broader discussions and decisions.

IGF 2023 DC-BAS A Maturity Model to Support Trust in Blockchain Solutions

Updated: Thu, 26/10/2023 - 22:37
AI & Emerging Technologies
Key Takeaways:
Benefits of a maturity model: it provides a common framework for assessing blockchains, supports trust in blockchain solutions, helps organizations identify areas for improvement, and facilitates communication and collaboration between stakeholders. Use cases include digital identity, banking and finance, digital assets, voting/elections, legal, supply chain, and others (e.g., healthcare, education). There is interest from parliamentarians, non-governmental organizations, and academia. Reported benefits from those who have conducted assessments of blockchain solutions based on a set of shared criteria: ensuring that blockchain solutions meet the needs of all stakeholders, reducing the risk of selecting inappropriate or inadequate blockchain solutions for specific use cases, and promoting the adoption of best practices in blockchain design and implementation. The rationale for using a maturity model is that it provides a structured, objective, repeatable, and technologically agnostic approach to assessing blockchain solutions.
Calls to Action

Provide more details about opportunities for training and awareness on the Blockchain Maturity Model and the corresponding assessment methodology. Share lessons learned and best practices. Involve key stakeholders and interested new parties in the periodic meetings of the IGF-BAS, the collection of input/requirements/suggestions from representatives of multi-stakeholder groups, the development and validation of sector-specific supplements, and the simulation of blockchain assessments.

Session Report

Dynamic Coalition on Blockchain Assurance and Standardization

 

Sessional Report: The IGF-Blockchain Assurance & Standardization, Panel Discussion on “A Maturity Model to Support Trust in Blockchain Solutions”.

Date of Session: 18 October 2023

Kyoto Conference Center, Room: WS 10 – Room I

Online Link: https://intgovforum.zoom.us/meeting/register/tJEucuihrT4pE9VXFZ6GWP2gQNOjl19VqgLQ

 

Introduction

The Dynamic Coalition on Blockchain Assurance and Standardization (IGF-DC-BAS) was established to connect, communicate, and collaborate with government leaders and stakeholders to use blockchain technology to improve public services.

More specifically, with the support of the Government Blockchain Association (GBA), the IGF-DC-BAS established a working group for International Organizations & Standards, supporting the UN-SG Global Digital Compact goals, including:

  • Ensure that everyone has access to the digital world.
  • Promote the use of digital technologies to achieve the Sustainable Development Goals.
  • Protect human rights and fundamental freedoms in the digital age.
  • Build trust in the digital world.

Outcome of the Session

Takeaways:

  • Benefits of a maturity model:
    • Provides a common framework for assessing blockchains.
    • Supports trust in blockchain solutions.
    • Helps organizations identify areas for improvement.
    • Facilitates communication and collaboration between stakeholders.

 

  • Use cases:
    • Digital identity
    • Banking & Finance
    • Digital Assets
    • Voting/elections
    • Legal
    • Supply chain
    • Other (e.g., healthcare, education)

 

  • Interest from parliamentarians, non-governmental organizations, and academia:
    • Demonstrates the growing awareness of the importance of blockchain assessments.
    • Creates opportunities for collaboration and knowledge sharing.

 

  • Reported benefits from those who have conducted assessments of blockchain solutions based on a set of shared criteria:
    • Ensure that blockchain solutions meet the needs of all stakeholders.
    • Reduce the risk of selecting inappropriate or inadequate blockchain solutions for their specific use cases.
    • Promote the adoption of best practices in blockchain design and implementation.

 

  • Rationale for using a maturity model:
    • A maturity model provides a structured, objective, repeatable, and technologically agnostic approach to assess blockchain solutions.
    • It helps organizations identify their current state of maturity and track their progress over time.
    • It can be used to benchmark blockchain solutions.

 

Plan of action:

  • Provide more details about opportunities for training and awareness on the Blockchain Maturity Model and the corresponding assessment methodology.
  • Share lessons learned and best practices.
  • Involve key stakeholders and interested new parties in the:
    • Periodic meetings of the IGF-DC-BAS.
    • Collection of input, requirements, and suggestions from representatives of multi-stakeholder groups.
    • Development and validation of sector-specific supplements.
    • Simulation of blockchain assessments.

 Additional Activities of the IGF-DC-BAS

In addition to the DC Session, representatives of the IGF-DC-BAS participated in the “Free and Fair Voting Panel”, “Blockchain Assurance Panel”, “Internet for All Panel”, and “Blockchain in Healthcare Panel”.

 

During the 4 days of the conference, the IGF-DC-BAS Team held 24 individual meetings with government officials (parliamentarians from Uganda, Kenya, and Ghana) and representatives from the media (Bloomberg), law firms, the private sector, and educational institutions.

 

The topics discussed included newly available functionalities in scalability of networks, secure identification, CBDC, voting, software supply chain security and general governance using zero knowledge, AI and blockchain technology.

 

 

IGF 2023 DC-Blockchain Implementation of the DAO Model Law:Challenges & Way Forward

Updated: Thu, 26/10/2023 - 22:20
Key Takeaways:

DAOs are a global technology that knows no borders, and there should not be a rush to regulate this technology and stifle its growth. We are also starting to see the emergence of case law in relation to key issues such as liability, tort, and fiduciary duties. What is needed is the use of sandboxing to allow DAOs to grow and deliver on their promise.

,

When looking at the development of legal frameworks in relation to DAOs, some of the procedural requirements already fit under existing legislation. For certain issues there may not be a need to develop de novo frameworks, but rather to address key issues such as the impact of joint and several liability on DAOs, which can stifle their development.

Calls to Action

There needs to be greater sensitization and discourse between DAO practitioners and governmental policy and law makers in order to remove misapprehensions about DAOs and clarify how the technology works and its benefits. In addition, DAOs could and should be used as an effective tool in promoting global participatory democracy by institutions such as the IGF, and this should definitely be explored further.

Session Report

 

Session Report IGF 2023 DC-Blockchain Implementation of the DAO Model Law: Challenges & Way Forward


  1. Session Details

The DAO Model Law is a multistakeholder effort led by COALA (Coalition of Legal Automated Applications) to provide legal certainty for so-called ‘unregistered’ DAOs (i.e., DAOs that are not wrapped in a legal entity form) and their participants, and unlike other regulatory frameworks, accommodate flexibility for their unique features and further innovation. Since its development the Model Law has served as a precedential source in the development of legislation such as the DAO Acts in both Utah and New Hampshire, parliamentary discussions in Australia, and has also been referenced in the recent call for evidence by the UK Law Commission. The session seeks to take the discussion further from the session hosted at IGF 2022, to analyse how different legislators and policy makers are approaching the development of legal frameworks to govern DAOs and also outline lessons learnt as well as recommendations for the way forward as more jurisdictions express interest in regulating unregistered DAOs. The session will have great benefit for policy makers, governmental representatives, law makers, practitioners as well as DAOs in navigating the course of granting legal recognition and certainty and will address the critical aspects of inter alia governance, functional and regulatory equivalence, liability attribution and taxation of unregistered DAOs.

It is intended that the workshop will be conducted in hybrid format to accommodate onsite participation at IGF 2023 as well as online attendees within various jurisdictions who wish to contribute to the discussion on the implementation of the DAO Model Law. In this regard it is anticipated that the official IGF Online meeting platform will be utilized, and online participants will be able to post comments and ask questions in relation to the content of the discussion.

  2. Panel Discussion

The presentation made during the panel discussion, and the ensuing conversation, centred on why there is a necessity to develop a DAO Model Law, the inherent advantages of DAOs, the primary principles of the DAO Model Law (viz. functional and regulatory equivalence), and an outline of its fundamental sections.

The discussion then focussed on the next steps and the progress being made by various jurisdictions towards the implementation of regulatory frameworks. This involved a close look at jurisdictions that have instituted incorporation options, such as Wyoming, Vermont and the Marshall Islands, as well as countries where the Model Law has been considered, reviewed or (partially) transposed, such as Australia (Bragg report, Senate of Australia), the United Kingdom (UK Law Commission DAO consultations), St. Helena, New Hampshire and Utah.

During the session the panel then focussed on some of the challenges faced in garnering adoption by countries, which centre on the key sensitive issues of regulatory equivalence, privacy rights recognised by law (including privacy of remuneration), and taxation.

  3. Next Steps/Way Ahead

It was identified that further work can be undertaken to refine the DAO Model Law, based on developments within the global sphere. As such, new taskforces will be convened to work on the key areas of Identity and Limited Liability, Privacy/Transparency, Taxation, and Technical Guarantees for Functional & Regulatory Equivalence and Updates.

  4. Key Session Takeaways

DAOs are a global technology that knows no borders, and there should not be a rush to regulate this technology and stifle its growth. We are also starting to see the emergence of case law in relation to key issues such as liability, tort, and fiduciary duties. What is needed is the use of sandboxing to allow DAOs to grow and deliver on their promise.

When looking at the development of legal frameworks in relation to DAOs, some of the procedural requirements already fit under existing legislation. For certain issues there may not be a need to develop de novo frameworks, but rather to address key issues such as the impact of joint and several liability on DAOs, which can stifle their development.

There needs to be greater sensitization and discourse between DAO practitioners and governmental policy and law makers in order to remove misapprehensions about DAOs and clarify how the technology works and its benefits. In addition, DAOs could and should be used as an effective tool in promoting global participatory democracy by institutions such as the IGF, and this should definitely be explored further.

---oOo---

IGF 2023 Lightning Talk #122 AI in the courts an opportunity for economic proceedings?

Updated: Thu, 26/10/2023 - 22:13
AI & Emerging Technologies
Key Takeaways:
The use of AI in alternative dispute resolution will be of great benefit to business. Being aware of the chances of winning a dispute, through a predicted outcome and/or an AI assessment of the strength of a party's arguments and position, will reduce the burden on the courts. We should use AI to issue non-binding resolutions that guide a party on whether to take the case to court or, for example, to settle.

,

The implementation of AI in the judiciary is a universal and global issue; the differences between legal systems remain in the background. We should develop postulates and international legal and ethical standards for the use of AI in the judiciary.

Calls to Action

We expect local governments to support the judiciary in closing the technology gap between business needs and the justice system. We should aspire to cooperation between business and public authorities, while creating clear and transparent rules for such cooperation. We must be aware of the temptation for private entities to gain access to citizens' data and attempt to manipulate court rulings using AI systems.

,

The implementation of AI in the courts should be progressive: in the first step we should use AI to perform routine, repetitive and time-consuming activities; as a second step, it would be good to implement solutions based on hybrid intelligence. While implementing AI-driven solutions we have to carefully review every activity processed in the court and analyze what can be replaced first.

Session Report

    The panel discussion titled "AI in the courts an opportunity for economic proceedings?" brought together industry experts who explored the implications, advantages, and challenges of integrating Artificial Intelligence (AI) into the judiciary. The session was moderated by Rafał Wieczerzak.

    Panelists and their Key Points:

    In her remarks, Anna Pietruszka primarily focused on how artificial intelligence can impact the efficiency of court proceedings, especially from a business perspective. She pointed out that introducing AI-based tools for straightforward, routine matters, such as making minor changes in business registers, could significantly speed up and simplify procedures. Anna also emphasized the need for modernizing communication within the judiciary. She suggested that while courts are an integral part of our system, their current communication methods are not aligned with modern realities. In her view, technologies like artificial intelligence can play a pivotal role in transforming these mechanisms to be more accessible and understandable to today's society.

    Gabriela Bar and Robert Sowiński highlighted the complexity of introducing AI into the judicial system. Gabriela focused on the ethical aspects of implementing AI. She underscored that trust in the system is crucial and that people need to believe that the technology is used fairly and transparently. Therefore, as she suggested, the optimal model would be Explainable Artificial Intelligence (XAI), which would be able to provide people with a logical justification for its decisions. Robert, on the other hand, cited the example of the Chinese judicial system where AI is already in use and pointed to the successes in the realm of alternative dispute resolution in the UK. However, he noted that this technology is not without risks, and we need to be aware of the potential consequences of its misuse.

    From a judge's perspective, Konrad Wasik shared his unique insights into the impact of artificial intelligence on the judiciary. He expressed concern over the burden of numerous administrative tasks that divert judges from their primary duty of adjudicating. In his opinion, artificial intelligence could significantly alleviate courts from these routine tasks, allowing them to concentrate on more complex cases that require human judgment. Konrad also identified potential areas of AI application, suggesting that its integration into the judiciary holds immense potential, as long as it's introduced with due caution and an understanding of its limitations.

    Post-panel Activities:

    The session was not just an opportunity to gain insights from the panelists but also a platform for attendees to ask questions. The face-to-face interaction allowed for lively debates and provided a chance for legal professionals from various countries and continents to network, exchange experiences, and establish valuable contacts.

    Conclusion:

    The panel successfully addressed the multidimensional aspects of integrating AI into the judiciary, from efficiency and modernization to ethical considerations. The consensus was that while AI offers great potential, its implementation needs to be done thoughtfully, ethically, and in a phased manner.

    The panel concluded with the following recommendations:

    The implementation of AI in the judiciary is a universal and global issue. The differences between legal systems remain in the background. We should develop postulates and international legal and ethical standards for the use of AI in the judiciary.

    The use of AI in alternative dispute resolution will be of great benefit to business. Being aware of the chances of winning a dispute, through a predicted outcome and/or an AI assessment of the strength of a party's arguments and position, will reduce the burden on the courts. We should use AI to issue non-binding resolutions that guide a party on whether to take the case to court or, for example, to settle.

    The implementation of AI in the courts should be progressive: in the first step we should use AI to perform routine, repetitive and time-consuming activities; as a second step, it would be good to implement solutions based on hybrid intelligence. While implementing AI-driven solutions we have to carefully review every activity processed in the court and analyze what can be replaced first.

    We expect local governments to support the judiciary in closing the technology gap between business needs and the justice system. We should aspire to cooperation between business and public authorities, while creating clear and transparent rules for such cooperation. We must be aware of the temptation for private entities to gain access to citizens' data and attempt to manipulate court rulings using AI systems.

     

    IGF 2023 WS #279 Sandboxes for Data Governance: Global Responsible Innovation

    Updated: Thu, 26/10/2023 - 21:54
    Global Digital Governance & Cooperation
    Key Takeaways:

    No sandbox will be the same, and the definition of a sandbox differs depending on who you ask. This shouldn’t alarm stakeholders but rather fuel openness and enable sandboxes to be used as an anchor for policy prototyping.

    ,

    Sandboxing is a spirit and can help actors share and understand a problem. This can clarify policy challenges or new tech applications and how to develop user safeguards.

    Calls to Action

    Regulators need to listen to different points of view. Building an effective sandbox is less about the skills and maturity of a regulator and more about regulators being allowed to engage purposefully with stakeholders.

    ,

    More experimentation and sharing of experiences are needed in order to help unpack the opportunities and challenges of setting up sandboxes for data in a particular sector or regulatory environment.

    Session Report

    Mr. Axel Klaphake, GIZ Director, Economic and Social Development, Digitalisation, opened the panel by briefly introducing the topic, emphasizing the benefits of data for economic growth and social development, and then introducing the speakers present at the table as well as those who would be attending online. 

    The on-site moderator, Armando Guio, then gave a presentation on the current state of regulatory sandboxes to offer context for the upcoming conversation. He defined the regulatory sandbox as "a regulatory approach, typically summarized in writing and published, that allows live, time-bound testing of innovations under a regulator's oversight. Novel financial products, technologies, and business models can be tested under a set of rules, supervision requirements, and appropriate safeguards." This concept was attributed to the UN Secretary-General's Special Advocate for Inclusive Finance for Development. Mr. Guio also cited examples of use from countries such as Brazil, Colombia, Ethiopia, Germany, Kenya, and Lithuania.

    As the first panelist's contribution, a video from ANPD, the Brazilian Data Protection Authority, which co-organized the panel, was broadcast; in it, Thiago Moraes emphasized the importance of fostering a dynamic discussion among all relevant stakeholders in order to deliberate strategies that can pave the way for the development of sandbox initiatives. He also announced the opening of the call for contributions for ANPD’s regulatory sandbox on AI and data protection, a crucial step forward in Brazil's journey toward responsible innovation.

    Agne Vaiciukeviciute, Vice Minister of Transport and Communication of the Republic of Lithuania, highlighted her country's experience with regulatory sandboxes. The outcome has been considered a success, and this has generated more interest and investments in this area. They are currently exploring 5G technology and its capabilities in depth. 

    Denise Wong, from the Singapore Data Protection Authority, IMDA, highlighted their experience and spoke about unlocking the potential of data through policy mechanisms in collaboration with industry, as a method to support companies and help them discover suitable safeguards and protections. Among further benefits, she cited one key advantage of employing sandboxes: reducing the time and effort required for technologies to be deployed, allowing enterprises to securely experiment with cutting-edge technologies that give them a competitive advantage.

    Lorrayne Porciuncula, from the DataSphere Initiative, noted that the steps governments must follow to successfully establish a regulatory sandbox vary depending on the national jurisdiction, the institutional framework, and the time frame, among other factors. Therefore, it is important to demystify what sandboxes are and to show that they are not exclusively for sophisticated regulators. In fact, a sandbox is a way of engaging purposefully with stakeholders from the design phase onward and building institutional trust with the private sector.

    Kari Laumann, from the Norwegian DPA, presented the benefits of using sandboxes in her country. As a good practice, she cited the experience of bringing firms into the dialogue prior to setting up the sandbox, with questions about what they were interested in building when it comes to AI and data protection, algorithmic fairness, and data minimization.

    Ololade Shyllon, from Meta, shared the private sector's perspective, saying that while the benefits of using sandboxes vary depending on the unique context of each project, in general, they help to reduce regulatory uncertainty, create a safe space for innovation, make adaptation faster, and build trust between regulators and the private sector. 

    The panel then proceeded with an online and in-person Q&A session. 

    Overall, the session brought out the following takeaways: 

    • It is critical to establish objective criteria and clear advantages for participants, such as certifications. Set highly specific use-case objectives as well. 

    • The sandbox is vital for mapping common problems that the public and the private sector would face when developing or deploying a technology. 

    • Bringing many stakeholders into the conversation can help to reduce regulatory capture. 

    • The resources needed to implement a sandbox may vary according to its goals and the skills and maturity of the regulator. 

    • Sharing experiences between countries is a great approach to learning about the many models available. 

    • Sandboxes can promote responsible data governance and AI innovation, creating a space where innovative ideas can flourish while respecting human rights, such as privacy and data protection. 

    IGF 2023 Networking Session #168 Advancing Open Science Globally: Challenges and Opportunities

    Updated: Thu, 26/10/2023 - 21:11
    Data Governance & Trust
    Key Takeaways:
    During the discussion, two distinct perspectives on open science emerged. One emphasized the need to enhance the organization and standardization of scientific production, aiming at maximizing the value that can be derived from it. The second perspective highlighted the importance of broadening access to scientific discoveries and derived products, and of involving a broader range of individuals in defining scientific processes.

    ,

    It's essential to outline specific actions that can drive progress toward these goals, and the appropriate actions vary depending on which perspective is adopted.

    Calls to Action

    To maximize the value derived from scientific research, there should be a concerted effort by the private sector to standardize data related to scientific research and make this data widely available on the internet.

    ,

    To enhance accessibility to scientific results and resources and enhance their social impact, it is crucial that governments reconsider existing intellectual property and patent models.

    Session Report

    Report on the Networking Session #168: "Advancing Open Science Globally: Challenges and Opportunities"

    The session was fascinating as it contrasted two different perspectives on the goals and paths of Open Science. While researchers and advocates from Latin America highlighted the importance of involving a broader range of individuals in the governance of science and of broadening free and open access to scientific discoveries and derived products in order to maximize its social impact, participants from the private sector and the global north emphasized the need to enhance the organization and standardization of scientific production, aiming at maximizing the value that can be derived from it.

    Henrique Xavier highlighted the persistent issue of paywalls to scientific publications. Moreover, while government and academic data are often open, data from private companies in areas like social media and artificial intelligence remain closed. Opening such data sources is essential for research on misinformation and AI governance, both discussed at the Internet Governance Forum.

    Sarita Albagli reinforced that paywalls hinder access to knowledge, particularly in the global south. She highlighted that Open Science is not only a more cost-effective model than closed science but also addresses the issue of knowledge access, preventing the loss of valuable resources. As a concrete example of a successful program, she mentioned the Brazilian bibliographic database SciELO.

    She raised the requirement for Open Science to address citizens' needs and the importance of involving citizens in research about issues that affect them. She also mentioned the risk of Open Washing, where companies direct Open Science to practices that allow them to profit, which could disproportionately affect the global south by making its research subordinated to private foreign interests.

    Carolina Botero emphasized that Open Science should grant access to publications and the knowledge generated by scientific research, such as vaccines during the pandemic. Rethinking patent laws is crucial to achieving this. Carolina emphasized the importance of addressing power imbalances, ensuring that all countries can utilize data for research purposes by adjusting legal frameworks to support global access.

    Kazuhiro Hayashi emphasized that Open Science goes beyond Open Access. It encompasses providing access to both data and research methods. He stressed the importance of international cooperation in making this data and knowledge accessible to everyone. He said Japan was implementing Open Access and Open Data policies for publicly funded research.

    Vint Cerf (present in the audience) mentioned Google Scholar and Schema.org as tools that help organize and standardize scientific knowledge. He raised the need to document experiment designs and the challenge of accessing old data, methods, and analyses after computer systems evolved. He questioned who should fund Open Science infrastructure and suggested we design a viable business model that could encourage companies to invest in these initiatives.

    Vint Cerf highlighted the importance of creating a document stating the desirable properties of an Open Science ecosystem. He suggested creating a vast database to ease data processing and analysis. Cerf emphasized the importance of its interoperability so the database could migrate in case of a lack of support from the host institution. He recommended organizations such as UNESCO and the International Science Council as potential allies in advancing Open Science.

    Two practical conclusions surfaced from the discussion. In order to maximize the value derived from scientific research, there should be a concerted effort by the global community, including the private sector, to standardize data and metadata related to scientific research and make this data widely available on the internet. To enhance accessibility to scientific results and resources and enhance their social impact, governments must reconsider existing intellectual property, copyright, and patent models.

    IGF 2023 Town Hall #170 Multistakeholder platform regulation and the Global South

    Updated: Thu, 26/10/2023 - 20:49
    Global Digital Governance & Cooperation
    Key Takeaways:

    Multistakeholderism is still largely considered the best way to construct consensus, ensuring results that encompass different stakeholders. However, it was highlighted that it needs improvements to guarantee meaningful participation from all stakeholders, especially from civil society and the technical community, which often have difficulty participating in national or international forums due to lack of resources and time.

    Calls to Action

    Guarantee more resources to civil society and the technical community to increase participation in international governance forums. Adopt bottom-up regulation, especially for technical standards such as AI, ensuring global south countries' participation and involving the technical community and private sector in rule formulation. The private sector should ensure openness and access to data in order to enable meaningful participation from other sectors.

    Session Report

    Organized by the Brazilian Internet Steering Committee (CGI.br), the Town Hall focused on delving into different digital platform regulation governance models through the exchange of global south countries’ practices, and on discussing the role of State and non-State stakeholders vis-à-vis the value of the Internet Governance multistakeholder model. The session was moderated by Henrique Faulhaber, counselor of the Brazilian Internet Steering Committee and representative of the private sector, who opened the session by presenting the role of multistakeholderism in Brazilian Internet Governance and the role it may have in platform regulation, highlighting the particularities of regulation and the institutional difficulties that may occur in global south countries. 

    Marielza Oliveira, from UNESCO, presented a more general approach to the multistakeholder model, highlighting its importance in building consensus involving multiple stakeholders; however, the model must overcome challenges to be inclusive, diverse and human-rights-based, and to account for the power imbalances created by big tech companies.

    Sunil Abraham, from Facebook India, on the other hand, highlighted the importance of coordinating all forms of regulation: state regulation, co-regulation and self-regulation with standards-setting organizations. This could be leveraged in platform regulation by giving room to bottom-up knowledge and norm setting, especially with global south participation, in a way that would ensure future-proof regulation. 

    Miriam Wimmer, director of the Brazilian DPA, also agreed on the importance of co-regulation, highlighting the complex institutional setting in Brazil and the difficulties in defining the scope of regulation and which authorities would be involved in a theme as broad as platform regulation. The director also emphasized that multistakeholderism is not incompatible with multilateralism. 

    Joanne D Cunha, researcher from the Centre For Communication Governance at NLU Delhi, pointed out the challenges for global south countries in platform regulations and participating in global forums and international processes, especially due to difficulties with resources. 

    At last, Renata Ávila from the Open Knowledge Foundation stressed the inequalities between different realities, in particular considering small global south countries that may lack not only platform regulation laws but also data protection laws. She also highlighted the importance of platforms not taking advantage of that situation, and of ensuring transparency and a general framework that can be replicated. 

    The Q&A session stressed the arrangements between the different regulation models that may be applied to platform regulation, and the challenges of cooperation between multiple authorities. It was also pointed out how platforms with transnational reach keep track of many jurisdictions and may replicate new mechanisms across different countries. Finally, the speakers highlighted the importance of south-south cooperation, of holding platforms accountable, and of an expanded multistakeholder model with more diverse participation. 

    We can highlight two key takeaways. Multistakeholderism is still largely considered the best way to construct consensus, ensuring results that encompass different stakeholders. However, it was pointed out that it needs improvements to guarantee meaningful participation from all stakeholders, especially from civil society and the technical community, which often have difficulty participating in national or international forums due to, among other reasons, lack of resources and time. Therefore, governance of platform regulation needs to consider the differences in institutional arrangements and the necessity of equalizing the power imbalances that large platforms may cause.  

    Call to actions mentioned: 

    • Guarantee more resources to civil society and technical community to increase participation in international governance forums 
    • Adopt bottom-up regulation, especially for technical standards such as AI, ensuring global south countries' participation. 
    • Ensure openness and access to data in order to enable meaningful participation. 
    IGF 2023 WS #311 Global Digital Value Chain: Africa’s Status and Way Forward

    Updated: Thu, 26/10/2023 - 20:35
    Digital Divides & Inclusion
    Key Takeaways:

    The session outlined that the GDVC has become increasingly complex and interconnected, with organizations and industries across the world collaborating and competing in the digital space, which has transformed the way businesses operate and how consumers access goods and services. Also, Africa lags in the GDVC as a result of low capacity, inadequate technology to harness available resources, and a poor appetite for indigenous solutions.

    ,

    There is an issue with the availability of indigenous funds: the bulk of the available funding comes from foreign venture capitalists with certain conditions and interests that keep Africa digitally dependent. Hence the need for indigenous funding for digital independence in African countries. In the same vein, speakers also commented on new approaches to digital infrastructure in the areas of electricity, telecommunications, and data centers.

    Calls to Action

    Governments, with the support of other stakeholders, should develop clear and supportive policies and regulations that prioritize local content and promote its integration into various sectors, such as energy, mining, manufacturing, and technology. African governments (the Nigeria Communications Commission - NCC, the National IT Development Agency - NITDA, and their counterparts across Africa) should explore massive investment in digital infrastructure.

    ,

    The private sector and other stakeholder groups should develop a crowdfunding mechanism to which indigenous investors and individuals could contribute. This would allow Africans to provide digital interventions that are controlled by and benefit Africa. A deliberate effort should be made to enhance capacity and engage the populace in inventing solutions to Africa's unique problems.

    Session Report

    AfICTA- Africa ICT Alliance Workshop Report

    IGF 2023 WS #311 Global Digital Value Chain: Africa’s Status and Way Forward, Thursday, 12th October, 2023, KYOTO, JAPAN

    Organized by: AfICTA-Africa ICT Alliance

    Overview of the Session: The discussion underscored the intricate nature of the Global Digital Value Chain (GDVC), where global organizations collaborate and compete digitally, reshaping businesses and consumer access to goods and services. Africa's lag in the GDVC was attributed to limited capacity, inadequate technology to utilize available resources, and a preference for non-indigenous solutions. Challenges regarding the GDVC's impact on Africa were discussed, emphasizing the continent's rich mineral and human resources for internet infrastructure. However, concerns were raised about retaining value within Africa. The session questioned Africa's exclusion from the value chain, emphasizing the need for increased value, consensus building, policy development, and active engagement in Internet Governance Forums. It highlighted Africa's consumption-centric approach and stressed the urgency of transitioning to a production-based economy. Critical questions were posed about Africa's ability to achieve sustainable development goals, accompanied by strategies to shift from consumption to production. The session emphasized the importance of creating a roadmap for capacity development, establishing production facilities, and enabling active participation in the global digital value chain.

    The onsite moderator, Dr. Jimson Olufuye, Principal Consultant at Kontemporary Konsulting Ltd, Nigeria, Founder, former Chair, and Chair of the Advisory Council of AfICTA, provided background information about AfICTA, an advocacy group for African ICT-driven businesses. AfICTA was founded in 2012 with six member nations and has since grown to more than 40 member African nations. Underscoring the importance of the workshop's theme concerning Africa's participation in the global value chain, he introduced the panelists, the online moderator and facilitators, and invited the Chair of AfICTA, Mr. Thabo Mashegoane, to deliver opening remarks.

    Speakers

    1. Mr. Bimbo Abioye, President of the Institute of Software Practitioners of Nigeria (ISPON) and Group Managing Director of Fintrak Software, Nigeria (Private sector, Africa)
    2. Dr. Kossi Amessinou, Chief of the World Bank Division, Benin Republic (Government, Africa)
    3. Dr. Melissa Sassi, representing the private sector in North America and serving as Partner & Chief Evangelist, P3 Network (Private sector, North America)
    4. Mrs. Mary Uduma, West Africa IGF Coordinator (Civil society)
    5. Professor Joanna Kulesza, University of Lodz, Poland (Academic community, Europe)
    6. Ms. Rachael Shitanda, AfICTA Vice-Chair, East Africa and Executive Member of Computer Society of Kenya (Private sector, Africa)
    7. Chief Toyin Oloniteru, CEO, DAPT - Data Analytics Privacy Technology; (Private sector, Africa)
    8. Dr. Chidi Diugwu, Deputy Director, New Media and Information Security, Nigeria Communications Commission (Government, Africa)
    9. Dr. Ben Ewah, Director of e-Government, NITDA - National IT Development Agency; (Government, Africa)

     Moderators

    1. Dr. Jimson Olufuye, Principal Consultant at Kontemporary Konsulting Ltd, Nigeria, and Founder/Fmr. Chair and Chair of the Advisory Council, AfICTA (Onsite Moderator)
    2. Mr. Inye Kemabonta, National Coordinator of AfICTA and CEO of Tech Law Development (Online Facilitator)

    Policy Questions to the Speakers

    The moderators posed the following questions to the speakers:

    1. Considering that Africa is rated as the continent with the least contribution to the GDVC, as evidenced by the difficulties experienced during the COVID-19 pandemic: a. How inclusive is the GDVC, and, as a concerned stakeholder, what initiatives or actions are required to reverse this trend? b. What are the soft areas through which Africa could penetrate the GDVC, and what benefits would the continent derive?
    2. Africa is home to major raw materials of production yet contributes little or nothing to the GDVC. What could have gone wrong, and what are the remedies?

    Mr. Bimbo Abioye, President of the Institute of Software Practitioners of Nigeria, addressed the questions by highlighting the challenges faced by Africa in the Global Digital Value Chain (GDVC). He pointed out the lack of ownership and a state of digital dependence ("digital slavery") in the continent's ecosystem. To address these issues, he emphasized the importance of enhancing policy frameworks, skills development, capacity development, research and development, and access to finance. Additionally, he stressed the need for infrastructural development and the creation of an enabling business environment across Africa. In his final submission, he envisaged the government leveraging existing solutions and existing capacity.

    Dr. Kossi Amessinou, Chief of the World Bank Division in Benin Republic highlighted the significant internet consumption from foreign countries but acknowledged a growing collective awareness in Africa, especially post-COVID. Despite this, challenges persist in the region. He proposed several solutions:

    • Massive investment in digital infrastructure: Dr. Kossi emphasized the need for substantial investments in digital infrastructure, especially from the private sector. He stressed the importance of broadband expansion into rural areas and advocated for new approaches to infrastructural development, including discussions on establishing data centers in Africa.
    • Internet exchange points: He suggested building Internet exchange points across Africa to enhance local networks.
    • Regulation: Dr. Kossi stressed the necessity of regulating the digital sector in Africa to ensure its growth and stability.
    • Digital literacy: Addressing the challenge of digital illiteracy, he recommended initiatives focused on enhancing digital literacy skills in the population.

    In his final submission, he envisaged capacity development and harnessing solar energy for Africa's own power.

    Dr. Ben Ewah, NITDA, emphasized the importance of understanding the existing structure of the labor market, especially the significant informal sector. He highlighted the need to identify specific areas where technology can address existing needs effectively. Focusing on interventions that cater for the majority of these needs will yield quick results for African markets. He stressed the government's role in recognizing the shift in resource utilization and harnessing these changes for national development.

    Dr. Chidi Diugwu from NCC emphasized the vital role of Human Capacity Development, particularly concerning the inclusion of raw materials. He highlighted NCC's commitment to promoting research and development in the academic realm, with a focus on strengthening research grants for students in the field of artificial intelligence, given the transformative nature of the digital age. Dr. Chidi stressed the importance of identifying young talents, fostering their development, and increasing the number of skilled individuals to enhance the Human Development Index.

    Ms. Mary Uduma, West Africa IGF Coordinator representing the civil society emphasized the importance of Africa's grassroots participation in the Global Digital Value Chain (GDVC). She highlighted the discussions held at the IGF, both regionally and nationally, and stressed the need for Africa to be actively engaged in the value chain. Mary Uduma expressed concerns about Africa's dependence on the Western world during the COVID-19 pandemic and advocated for developing local businesses and voices within the continent. She praised Africa's achievements in the fin-tech sector, citing examples like Konga and Jumia. Mary Uduma called for the protection of human rights, advocating for standards and data safety. She questioned the location of data and emphasized the importance of housing data within Africa rather than relying solely on cloud services.

    Dr. Melissa Sassi from the Private Sector in North America highlighted the significance of tech entrepreneurship for Africa's economic growth. She emphasized the need to foster a culture of digital entrepreneurship, which plays a crucial role in Africa's capacity and economic development. Dr. Sassi stressed the importance of encouraging innovation, financial stability, practical skills, collaboration, and engagement. She advocated for integrating entrepreneurship culture into tertiary education and scaling up capacity-development efforts.

    Chief Toyin Oloniteru, CEO of DAPT, highlighted the importance of unbiased self-appraisal regarding Africa's strengths and progress. He emphasized the need to build on existing strengths and advance further. Chief Toyin pointed out the significant business expansions in Africa, citing examples like MTN and the banking sector, which have expanded beyond the continent. He stressed the need for behavioral modification, advocating for crowdfunding and crowdsourcing within Africa's resources. Chief Toyin emphasized the value of funding initiatives through crowdsourcing, promoting self-reliance and reducing dependency on external sources. He added that the younger generation needs structure and guidance to focus on the diverse opportunities available for skills development toward sustainable growth and development in Africa.

    Ms. Rachael Shitanda, Executive Member of Computer Society of Kenya, highlighted the need for Africa to leverage its resources for economic development and internet inclusivity. She emphasized the importance of developing local content, focusing on government initiatives. She shared perspectives with Mr. Bimbo Abioye on finance, creating enabling environments, local networks, and policy regulation. Ms. Shitanda stressed the importance of breaking silos, merging skills, and strengthening capital investment. She urged the continent to safeguard its data and collaborate effectively for growth and development.
     
    Prof. Joanna Kulesza, representing the Academia, emphasized the need for comprehensive and well-aligned regulations, coordinated and reliable capacity development, addressing policy challenges in Africa's global value chain, and aligning policies with sustainable development goals. She stressed the importance of civil society engagement, consistent policy development, raising awareness about broadband satellite, and resolving data-related questions. Prof. Kulesza highlighted the role of governments in ensuring increased African participation in the digital chain.

    She further emphasized the need to address policy challenges within the digital value chain, particularly in the African region. She highlighted the importance of aligning the sustainable development goals with secure and stable internet access, enabling the development of technology based on accessible opportunities. Prof. Kulesza stressed the importance of awareness and recommended strengthening civil society engagement. She advocated for policy development through a multistakeholder approach, emphasizing that Internet access is a human right. She urged governments to consider jurisdiction, equipment ownership, and internet shutdown protocols during crises. Regarding data collection processes, she underscored the necessity of government involvement to enhance Africa's participation in the global value chain.

    Summary Recommendations 

    1. Governments, along with the support from various stakeholders, should formulate clear and supportive policies prioritizing local content integration in sectors like energy, mining, manufacturing, and technology. African governments, including entities like Nigeria Communications Commission (NCC) and National IT Development Agency (NITDA), should invest significantly in digital infrastructure.
    2. The private sector and other stakeholders should establish a crowdfunding mechanism where indigenous investors and individuals can contribute. This approach enables Africans to create digital interventions that are locally controlled and beneficial to the continent. A deliberate effort should be made to enhance capacity and engage the public in inventing solutions for our unique challenges.
    3. Africa needs a holistic approach to enhance its participation in the Global Digital Value Chain (GDVC). This includes investing in digital infrastructure, promoting indigenous solutions, and fostering digital entrepreneurship. Governments and private sectors should collaborate to develop clear policies, encourage local content integration, and invest in digital infrastructure. Additionally, there should be a focus on human capacity development, especially in emerging technologies like artificial intelligence. Identifying and nurturing talents among the youth is crucial for long-term sustainable growth.
    4. It's essential to mentor and empower the younger generation in the rapidly evolving digital landscape.
    5. African nations must enhance capacity development comprehensively across various sectors.
    IGF 2023 WS #495 Next-Gen Education: Harnessing Generative AI

    Updated: Thu, 26/10/2023 - 20:21
    AI & Emerging Technologies
    Key Takeaways:

    Digital empowerment is a priority, and GenAI in particular has a lot of potential in academic curricula for young minds. By enabling access via audio inputs, translation tools, etc., GenAI can amplify an individual's potential and improve learning outcomes. But there are academic concerns, such as accepted levels of plagiarism and the impact on critical thinking.

    ,

    There is a strong need for robust cybersecurity measures: use of GenAI by youth and school students will require strong security and data privacy measures, as it is prone to misuse. Privacy is a quintessential concern for a young person. By setting standards, sharing global best practices, etc., we can successfully merge GenAI into education. It is a multifaceted challenge, but the benefits outweigh the challenge.

    Calls to Action

    Policymakers need to take an inclusive approach that makes the use of GenAI more globally diverse and inclusive of ethnicities, races, and local contexts. Diverse datasets and newer user-centric approaches that go beyond Euro-centric models, with privacy by design, are welcome.

    ,

    Educators need to collaborate with the technical community, app developers, cybersecurity experts, and others to ideate on more inclusive GenAI.

    Session Report

    Link to the report (PDF Version): https://drive.google.com/file/d/16QC9suOkn4ZBNzpkta8xZZl-Gg5KW5dM/view?…

    IGF 2023 WS #495 Next-Gen Education: Harnessing Generative AI

    Ihita G. welcomed everyone and set the context by highlighting the relevance of Generative AI (GenAI) in education, underlining its use in personalized learning. She added that the use of GenAI further increases the importance of critical thinking and digital literacy, and invited interventions from the audience, which primarily consisted of concerns around plagiarism in academic work.

    She introduced the speakers and invited Ms. Dunola Oladapo, a representative of an intergovernmental organization, to explore GenAI's role in education. Ms. Oladapo argued that digital empowerment is a priority for the youth and that COVID-19 was a defining moment in history. Digital access is not uniform: about 55% of youth in Africa do not have access to the Internet. This has a multifold impact; a lack of affordable devices, high internet costs, and other challenges restrict young people from participating in a connected future with others.

    She shared the work of ITU's Generation Connect platform on AI for Good, which focuses on how young people are connecting with AI and explores different ways the power of technology can be harnessed for a connected digital future.

    Ihita asked Connie Man Hei Siu and Osei Manu Kagyah (civil society) for their opinions on the responsible and ethical use of generative AI technologies in educational settings, including algorithms, and the gaps that need to be addressed. Osei said that it is an important conversation that is long overdue, given that industry is racing ahead of academia. He emphasized the need for a human-centric approach and a mutual platform to address issues of accountability, bias, and security of generative AI in formal education.

    Connie shared her insights and highlighted the importance of exploring GenAI, as it can knock down long-standing barriers like language and make learning more inclusive via translations, audio inputs, etc. GenAI can also help students by managing schedules, increasing learning outcomes, connecting them with peers, and reducing the stress of multitasking. She then turned to the challenges, emphasizing the potential misuse of the technology:

    • a high degree of reliance could hinder students' critical thinking skills
    • because it requires a lot of data, it can compromise users' privacy
    • AI systems can inherit biases

    She underlined the need to promote responsible usage and vigilance since technology isn’t inherently good or bad.

    Ihita invited the audience to make interventions, which featured concerns about the right kind of regulation and the need for an academic dialogue between PhD scholars and mentors on the extent of the use of GenAI. Educators in the audience cited the calculator as a case where there were fears it would hinder critical abilities, but which instead amplified mathematical skills.

    Ihita posed another question to the speakers - How can policymakers collaborate with relevant stakeholders to ensure that teaching and learning processes are enhanced while sustaining creativity, critical thinking, and problem-solving?

    Connie responded that policymakers need a thorough understanding of the technology, especially of how to leverage GenAI's power while safeguarding against its risks. She suggested that it is important for policymakers to collaborate with stakeholders like students, teachers, and academic institutions to understand the challenges. Further, to address the challenges of data protection and security infrastructure, educators can team up with teacher training institutes and tech companies. She highlighted that setting standards and sharing best practices globally can lead to successfully merging GenAI into education. It is a multifaceted challenge, but the benefits outweigh the challenge.

    Online moderator Adisa agreed with Connie and added that the curriculum needs to evolve to address real-world challenges. Ihita noted the need for more assessment of the models and asked Osei to respond. He argued that there is a need to decolonize the designs, since the deployment of AI tools has reflected bias.

    Ihita posed the final policy question: How can policymakers ensure that the use of generative AI by youth in education is inclusive, age-appropriate, and aligned with their developmental needs and abilities? She invited audience interventions on how this concern could be approached.

    A teacher from Finland expressed concern about who will create an inclusive model for children, as the approach of educators differs from that of a profit-driven company, and the goals of inclusivity and protection need to be aligned with learning. Another teacher, from Japan, added that GenAI models are US-centric and there is a need to explore local contexts. Another audience member added that it is not just about access to technology but also about knowledge; for example, the domestic context becomes important in understanding what kind of data pool is being referred to. He referred to UNESCO's open data report, whose open science recommendation underlines knowledge sharing in the sense of Global Commons.

    Ihita approached the speakers for their final comments. Osei emphasized the need for more interventions in different languages to move away from Euro-centric approaches. Connie suggested the need for stronger data protection laws and added that, with critical digital literacy skills, young people will be better equipped to navigate digital spaces. Policymakers need to take an inclusivity-driven approach, taking personalized learning experiences, linguistic diversity, etc. into consideration. Ihita concluded that young people need to take a stand and contribute to decision-making processes themselves to make the best of GenAI. She thanked everyone for joining.


    IGF 2023 Launch / Award Event #46 The State of Global Internet Freedom, Thirteen Years On

    Updated: Thu, 26/10/2023 - 20:19
    Human Rights & Freedoms
    Key Takeaways:

    • The multistakeholder model for internet governance is a crucial part of combating cyber threats, strengthening human rights and democracy online, and maintaining a global, open, free, and secure internet.

    ,

    • Laws governing the digital space that are developed in democracies can have drastically different and unintended consequences for people’s rights when imposed in less free contexts.

    Calls to Action

    • The Freedom Online Coalition should be more inclusive in its efforts to engage with civil society around the world.

    ,

    • Democracies should ensure that they are modeling rights-respecting legislation and regulatory approaches that will not restrict human rights online in less free spaces.

    Session Report

    Moderator Allie Funk began the session with an overview of findings from Freedom House’s Freedom on the Net 2023 report, which examined how artificial intelligence is deepening the crisis of internet freedom. She noted that AI drives intrusive surveillance, empowers precise and subtle censorship, and amplifies disinformation campaigns as generative AI lowers the barriers to entry for the disinformation market. She shared that if AI is designed and deployed safely, it can be used to bolster internet freedom. She closed by noting that as AI augments digital repression, there is an urgent need to regulate it, drawing on the lessons learned over the past 15 years of internet governance, namely: not overly relying on companies to self-regulate, centering human rights standards in good governance of the internet from governments, and the importance of involving civil society, particularly from the global majority. 

    Olga Kyryliuk discussed how the internet freedom space has changed in the last ten years. She described how initial hopes were that the multi-stakeholder model would make it easy to reach consensus on a way to regulate technology, and that ten years ago, many also felt that legal regulation would be able to catch up with technological advancement. She noted that, looking back, regulation has still lagged behind, but there is now a greater recognition of the importance of digital rights. She shared that innovations in AI and other technologies have brought new risks and opportunities, particularly when it comes to governments balancing their safety and security interests with protecting rights online. She closed by noting that continued multistakeholder collaboration is positive, but many people want more than just venues for discussion: actionable results such as initiatives or partnerships that will lead to change.

    Guus Van Zwoll discussed walking the tightrope of the “Brussels effect” and trying to ensure that regulations adopted by other countries with lower rule of law standards will not have adverse human rights impacts. He touched on the difficulty of balancing between fighting censorship and fighting disinformation. He described work done in the Netherlands to ensure that regulation incorporates strong requirements for transparency and references to the guiding principles on business and human rights, so that if other countries copy EU regulations, these considerations that were reached through a long multistakeholder process will already be baked into the laws. He noted that when the Netherlands has bilateral discussions, Dutch policymakers urge other governments to adopt human rights and democratic clauses in their regulations.

    Emilie Pradichit discussed the proliferation of harmful cyber laws throughout Southeast Asia that target dissenting voices in the name of security, and cases in which people in Thailand and Laos have been imprisoned for speaking the truth or sharing criticism on Facebook. She identified the lack of clear definitions for terms like national security as a problematic part of such regulation, and that voluntary commitments from tech companies do not do enough to counter such problems. She expressed that companies should have meaningful engagement with other stakeholders, both on how to prevent harm and to provide remediation after the fact, not just to tick the box of consulting civil society with no follow-up. She noted that digital rights organizations are small and cannot combat the misuse of platforms by governments on their own, but end up being told that companies cannot do anything either. She called for decisions about how tech companies and AI should be regulated to come from those who have been most impacted, through meaningful engagement that holds the powerful to account. 

    On multistakeholder engagement, Guus discussed efforts through the Freedom Online Coalition (FOC) and other initiatives, to incorporate and mainstream the Dutch Cyber Strategy among civil society groups, to ensure that while digital security remains high, there are principles for governments seeking to balance this with human rights, developing the governance structures to protect against a surveillance and censorship apparatus. 

    Olga commented on the desire among many in civil society for greater clarity about engaging in the FOC and other initiatives. She called for greater opportunities, in addition to the FOC advisory network, such as bringing back the Freedom Online Conference, as a venue for civil society to consult with FOC member governments on issues including AI. 

    Emilie emphasized that the FOC has not yet made itself accessible among civil society groups in Southeast Asia or other contexts across the majority world, where rights defenders are most under threat from digital authoritarianism and struggling under repressive governments. She pointed out the role that FOC governments could play in pressuring less democratic governments or companies that are operating in repressive contexts, particularly in cases where those still in-country are unable to speak out safely. 

    Olga added that getting access to government stakeholders at regional level IGFs and other meetings can be a challenge for civil society. She suggested that FOC governments should work to incentivize governments to engage with local and regional communities outside the global IGF, in order to develop partnerships and work together in a meaningful multistakeholder way. 

    Throughout the Q&A, panelists discussed the challenges for civil society in engaging with other global efforts, including the UN’s Global Digital Compact. Panelists also discussed the difficulty of ensuring that laws that are built on models from the EU, whether it be the DSA, DMA, or EU AI Act, still include the positive protections for human rights defenders without imposing regulations that are overly burdensome and not responsive to local needs and realities.

    Olga highlighted the importance of dialogue and conversations happening early on, before a law is drafted and adopted, to ensure that it is responsive to the local context, which sometimes requires advance capacity building as well. Emilie shared the frustration that civil society in Southeast Asia often feels with government-led regulation efforts, as there are few to no opportunities to engage. She noted that governments will say they are adopting global standards as a way to receive diplomatic applause, while still refusing to engage with human rights defenders or other stakeholders. 

    Guus noted that the Brussels effect was not always intended, and that although EU governments developed these laws, the way they have had global impacts was not something that was planned, which makes civil society feedback a crucial part of the learning process to improve the implementation of future regulations. 

    No feedback was received from remote participants during or after the session. 

    IGF 2023 Town Hall #61 Beyond development: connectivity as human rights enabler

    Updated: Thu, 26/10/2023 - 20:05
    Digital Divides & Inclusion
    Key Takeaways:

    It was highlighted by Robert Pepper that a shift can be identified in the lack of connectivity, from a coverage gap to a usage gap. This means that Internet coverage has recently improved, and the main issue now lies in Internet use by people who live in regions that already have Internet coverage.

    ,

    Promises of universalizing Internet access through 5G have not yet materialized, and some sectors are already discussing 6G technology. Internet fees, such as the “fair share” proposal, may lead to fragmentation, since only a few companies would be able to provide a globally connected infrastructure. Zero-rating agreements give an unfair advantage to large companies.

    Calls to Action

    We call on governments and intergovernmental agencies to reinforce the relevance of universal and meaningful connectivity as a fundamental enabler of human rights, and to elaborate on this relevance for the protection, promotion, and enjoyment of civil and political rights, in addition to economic and social development

    , We ask policymakers and governments to stand against the imposition of direct payment obligations that benefit a few telecom operators. The current system has proven its resilience and ability to evolve alongside the Internet. Considering the roles of small, community, and nonprofit operators in providing complementary connectivity for rural areas and minorities, beyond sole reliance on incumbent infrastructure providers, will sustainably address the digital divide
    Session Report

    Beyond development: connectivity as human rights enabler

    October 2023

    by session organizers: Raquel Rennó, Lucs Teixeira, and Nathan Paschoalini

    Introduction

    The 2030 Agenda for Sustainable Development explicitly recognises that the spread of information and communication technologies has the power to bridge the digital divide; as such, governments are increasingly addressing connectivity expansion as part of their efforts to meet the Sustainable Development Goals. However, framing connectivity solely as a facilitator for social and economic growth is limiting. These approaches ultimately privilege the most powerful telecommunication industries that can afford international agreements; if all connectivity is provided by the same few global incumbent telecommunication operators, there will be very little diversity in technologies, content, and little space for dissident voices.

    To expand on this issue and bring in different views, ARTICLE 19 organized a Town Hall session during the 18th edition of the Internet Governance Forum (IGF2023) in Kyoto, Japan. It brought together regulators, members from the private sector, the technical community and civil society to discuss the following questions:

    • Would it be possible to re-center connectivity as a human rights enabler, moving away from the development-only approach?
    • How can public-private partnerships (PPPs) and cross-national agreements help solve the digital divide while allowing diversity in ISP technologies and fostering innovative policies and techniques for spectrum management, instead of just promoting one specific industry?

    Moderated by ARTICLE 19 Program Officer Raquel Renno Nunes, the session included Jane Coffin (civil society), Thomas Lohninger (epicenter.works, civil society), Robert Pepper (Meta, private sector) and Nathalia Lobo (Ministry of Communication of Brazil, public sector). As online moderator, Lucs Teixeira (ARTICLE 19 Internet of Rights fellow, civil society) coordinated participants in the Zoom room; Nathan Paschoalini (Data Privacy Brazil, civil society) was the rapporteur.

    The full recording of the Town Hall session, with captions, is available at IGF’s YouTube channel: https://www.youtube.com/watch?v=MwlgWVXYFuo

    Discussion

    Before the discussion, the on-site moderator, Raquel Renno, stated that this Town Hall should be a space for an open discussion of connectivity issues, one that enables different views on the subject, considering its importance as a human rights enabler. The invited speakers then presented their views on the questions raised above, with the opportunity for participation extended both to the on-site audience and to remote participants.

    After the panellists’ interventions, there was an open-mic round in which members of the audience and the panellists could debate the topics covered at the beginning of the panel.

    We split the points raised among three interrelated problems.

    Problem 1: Building infrastructure

    Robert Pepper highlighted that in the last few years it has been possible to identify a shift from a “coverage gap” to a “usage gap”: more than 2 billion people could be online, but are not. He mentioned a project Meta conducted in sub-Saharan countries to understand why the majority of the population in the region is not online. The study identified three main reasons: a) affordability of devices and monthly services; b) lack of digital literacy; and c) lack of locally relevant content online. A further issue was the lack of electricity. He asked how to bring people online, considering that Internet access should be recognized as a human right and a human rights enabler.

    Jane Coffin, in her turn, described how difficult it was to take fiber from Zambia to South Africa, mentioning negotiations at the countries’ borders, a historical bridge in the way, and a swarm of bees as obstacles during a deployment that took more than a year. The example highlights the difficulties of building Internet infrastructure in a cross-border region. According to Coffin, it takes a multistakeholder approach to improve Internet access and to strengthen the dialogue with governments, so they can understand what has to be done to speed up Internet connectivity.

    She also mentioned that community networks bring a diversity of perspectives to last-mile connectivity. Such networks can provide a type of Internet connection different from those offered by bigger ISPs, which do not always have an economic incentive to connect people in remote or otherwise impractical places. She stated that building network infrastructure is usually very expensive, but that there are alternative ways to build it, especially when focused on smaller networks, and that different organizations can work together to achieve and improve Internet connectivity for underserved publics.

    Thomas Lohninger acknowledged that the promises related to 5G, especially regarding connectivity, have not yet materialized; despite this, discussions about 6G are already underway.

    Nathalia Lobo presented the Brazilian context on the universalization of Internet access, a challenge given the country’s continental dimensions. She mentioned that the Brazilian 5G auction was an opportunity to impose obligations related to the universalization of Internet access on the companies that won the process.

    She also presented a Brazilian public policy named Connected North, designed to strengthen connectivity in the northern region of Brazil through eight information highways composed of twelve thousand kilometers of optical fiber laid in the bed of the Amazon River. Lobo also mentioned that public-private partnerships play a key role in the delivery and maintenance of the Connected North project.

    Problem 2: Fair share proposals

    Thomas Lohninger addressed issues related to network fees, such as the fair share debate, which is not new, dating back to the telephony era. According to Thomas, small ISPs have revealed that they fear for their ability to compete and to connect to other networks if such a proposal is approved, due to economic barriers. This, Thomas said, might lead to a fragmented Internet, where only large ISPs would have the financial resources to remain connected to the global network.

    Robert Pepper reinforced this critical view on network fees, explaining that the whole rationale behind them is based on the architecture and economics of the “Telecom Termination Monopoly”. With past network architectures, the distance and duration of connections increased costs substantially; since 4G arrived, with “essentially flat IP networks even in mobile”, the cost of a connection is a step function, and the duration or volume of data exchanged does not increase costs for the telcos unless connections peak simultaneously.

    Problem 3: Zero-rating practices and Net Neutrality

    Thomas Lohninger mentioned issues related to zero rating, such as Meta’s Free Basics, taking the Colombian Constitutional Court case as an example. He stated that zero-rating contracts violate net neutrality, the defense of which is deeply tied to achieving meaningful connectivity.

    Regarding this, Robert Pepper mentioned a Meta project called “Discover”, which he described as an evolution of Free Basics: instead of limiting access to a selection of allowed websites, it limits all web pages to text, filtering out images and video. Pepper presented this as a solution that is not perfect but may serve as an “introduction to the Internet”, and as a way for people on prepaid packages to keep using the network, even if degraded, after their data package runs out.

    Key takeaways

    1. Some of the panelists see a shift in the lack of connectivity from a coverage gap to a usage gap. This means that Internet coverage has recently improved, and the main issue now lies in Internet use by people who live in regions that already have coverage;
       
    2. On the other hand, some consider the lack of infrastructure still an important issue to address. It is conventional wisdom that building infrastructure is expensive; however, there are strategies to lower this cost, which require a strong multistakeholder approach.
       
    3. A business model aiming at continuous improvement, always faster and better, clashes with the reality faced by many in the Global Majority. The promises of universalizing Internet access through 5G have not yet materialized, and some sectors are already discussing 6G technology. New regulatory proposals, such as the fair share or tech toll, may lead to Internet fragmentation, where only the largest content providers would be able to accommodate the demands, and even then only in the strongest markets, where the investment would meet some return. A second concern related to zero-rating agreements, which give an unfair advantage to large companies.

    Next steps:

    Based on the discussion, the organizers of the session see different interests from the private sector and pressure on the public sector, which in some cases can override the needs of people in the most fragile conditions. It would be important to:

    • Have governments and intergovernmental agencies reinforce the relevance of universal and meaningful connectivity as a fundamental enabler of human rights, and elaborate on this relevance for the protection, promotion, and enjoyment of civil and political rights, in addition to economic and social development.
       
    • We also call for regulators to adopt a human rights-based approach to national, regional, and local connectivity expansion and improvement plans, considering the roles of small, community, and non-profit operators in providing complementary connectivity for rural areas and minorities, beyond sole reliance on incumbent telecom infrastructure providers, to sustainably address the digital divide.
    IGF 2023 Open Forum #54 The Challenges of Data Governance in a Multilateral World

    Updated: Thu, 26/10/2023 - 19:36
    Data Governance & Trust
    Key Takeaways:

    Multilateralism and international organizations evolve as answers to cross-border challenges, enabling interstate cooperation to discover appropriate solutions

    ,

    A greater understanding of national points of view can be an extremely helpful tool for international conversations

    Calls to Action

    It is a vital step for multilateral spaces to explore potential paths ahead in order to reach agreement on a standard vocabulary for fundamental issues connected to Internet and data governance.

    ,

    Further experience sharing may be beneficial in finding effective approaches that others might replicate.

    Session Report

    Organized by the Laboratory of Public Policy and Internet (LAPIN) and the Brazilian Data Protection Authority (ANPD), the panel focused on debating how the Data Governance theme has been discussed in the G7 and G20 forums. The session was moderated by José Renato (LAPIN).

     

    Mr. Yoichi Iida, from the Ministry of Internal Affairs and Communications of Japan, opened the presentation by describing Japan’s active participation in those forums in defense of inclusivity and a multistakeholder approach. Iida showed how Japan has focused on the free flow of information across borders, proposing its Data Free Flow with Trust (DFFT) initiative at the G20 in 2019. Mr. Iida also affirmed that the G20 uses "human centricity" as its main terminology, as opposed to the G7, which uses "democracy". He regards data flow and AI governance as the most important themes on the agenda, with the attendant challenges of privacy protection, interoperability, and human rights protection. The Japanese government recognizes the diversity of jurisdictions and approaches, but believes frameworks should be as coherent and interoperable as possible.

    In the second part, Gaurav Sharma brought the Indian perspective on embracing technology and digitalization in the G7 and G20, and on the Data Protection Bill in India. He affirmed the need to focus on norms able to operate across sectors. For him, digital strategies should be transparent, inclusive, secure, and conducive to the Sustainable Development Goals. To finish his presentation, he called for more participation from the Global South.

    Alexandre talked about the labor behind data production and microworkers. He mentioned the need for attention on Cloud Economy and the so-called gatekeepers, and how the multi-level approach can benefit the discussions around data governance. To conclude, Alexandre mentioned the importance of merging digital rights organizations with traditional social movements, even in G20 negotiations.

    Veronica Arroyo mentioned that the discussion about data governance depends on where it is located, because of the differences between jurisdictions. Some jurisdictions have very strong enforcement mechanisms, while in other cases the country follows a more flexible approach. Thus, the design of data governance depends heavily on a country’s policies and priorities. She presented how the SDGs can be the core issue and how commonalities help to meet those goals across different frameworks.

    Luciano Mazza explained that there are different approaches to data protection and data flows, and that within the G20 there is a certain stabilization, in conceptual terms, of the discussion on data free flow with trust. Every country tries to bring its own issues and reality to the theme and to weigh which ones rank higher in importance. Mazza explained that one reason the subject has not been discussed more directly is that when the G20 started discussing data governance, there was an attempt to balance the debate on the free flow of data against potential concerns or constraints from a more development-oriented perspective. He noted that two approaches are involved that are complementary in a way but not fully articulated in the G20 debate: data free flow with trust, and data for development. From a developing-country perspective, the subject may not feel fully mature, but in Brazil’s case the government recognizes that the issue is crucial and of utmost importance. He presented four priorities: universal and meaningful connectivity, artificial intelligence, e-government, and information integrity. He affirmed that he does not envision the discussion on data governance becoming a full-front debate in those forums.

    Miriam Wimmer addressed some of the challenges posed by data governance and how it has been explored, including the different approaches proposed by multilateral organizations such as the G7, the G20, the UN, and the OECD over the past years. She pointed out that we have been observing many discussions and different proposals, manifested through declarations, roadmaps, and agendas based on concepts such as data free flow with trust. One of the main challenges, in her perspective, is to understand how these different proposals interconnect, in which aspects they complement each other, and in which cases they create tensions or gaps. Wimmer affirmed that another relevant aspect is how to make sure that all important stakeholders participate in these discussions, understanding that when we discuss the flow of data across borders, we are not only debating the interests of companies or states, but the rights of individuals. The discussion should take into account multiple perspectives, based not only on the different approaches that countries may have towards data protection, but also on the different interests of the various stakeholders affected by this discussion. She mentioned the current debate in Brazil on international data transfers, in which the authority is facing the challenge of making sure that the mechanisms to be established are interoperable and will allow for the protection of the fundamental right to data protection, regardless of where the data is actually located.

     

    IGF 2023 WS #198 All hands on deck to connect the next billions

    Updated: Thu, 26/10/2023 - 19:27
    Digital Divides & Inclusion
    Key Takeaways:
    The session underscored that connectivity holds immense transformative potential, yet it grapples with several impediments. These include the challenges of rural connectivity and the staggering negative impact of the digital gender gap. Technological and policy innovation is pivotal in closing the connectivity gap. Better metrics, realistic policy targets and collaborative, whole-of society approaches are essential to overcome the challenges.
    Calls to Action
    Governments must invest in infrastructure, accommodate rising demand for connectivity, and prioritize skills development. Stakeholders should work together to address barriers and bridge the digital divide through a partnership-driven approach. Other stakeholders should continue innovating to expand connectivity and form partnerships. A collaborative, whole-of-society approach is essential to achieve and reap the benefits of global connectivity.
    Session Report

    Introduction and key takeaways

    Meaningful connectivity fuels innovation, competitiveness, and sustainable growth for all, but despite numerous private ventures, intergovernmental agreements and multistakeholder commitments to advance universal meaningful connectivity, 2.6 billion of the world’s population remains unconnected. To bridge this gap, it is important to go beyond traditional approaches and encourage innovation, cooperation, and flexible solutions to connect the next billions.

    The session brought together policy and technology experts to discuss concrete approaches to scale up innovative solutions for universal meaningful connectivity, while fostering investment and cross-sector partnerships to unlock the potential of ICTs and digital technologies.

    Against this backdrop, the speakers addressed the multifaceted challenges that impede widespread connectivity, including the challenges of bringing remote rural areas online and addressing the digital gender gap, which if closed, is estimated to have the potential to add 3 trillion USD to global GDP annually. Furthermore, the discussions underscored the need for the development of more robust metrics for measuring inclusivity and the setting of realistic policy targets to connect underserved populations effectively. The crucial role of governments in enabling meaningful connectivity took centre stage, with a call for greater investment, readiness to meet the burgeoning demand for connectivity, and the imperative of nurturing the required skills.

    The speakers delved deep into strategies aimed at dismantling the various barriers obstructing universal connectivity, with a particular emphasis on the role of the private sector in bridging the coverage and usage gaps through innovative approaches.

    Overall, consensus emerged among the speakers, who highlighted the importance of a collaborative partnership approach in delivering universal, meaningful connectivity. It was emphasised that all stakeholders, including governments, private sector, civil society and academia, must come together to collectively address the challenge of connecting the next billions through a holistic whole-of-society, or ecosystem, approach. This unified effort is seen as the most effective means of ensuring that no one is left behind in the global digital transformation.

    Call to action

    Governments have a significant role to play by dedicating resources to develop essential infrastructure, catering to the rising demand for connectivity, and helping people to build digital skills. Collaboration among diverse stakeholders is vital to effectively bridge the digital divide, with a strong emphasis on fostering partnerships. Realising the benefits of global connectivity requires a comprehensive, whole-of-society approach. Moreover, it's imperative that all stakeholders can continue to innovate, which requires an enabling policy environment. When developing policy and regulatory frameworks, it is essential to recognise the value of the entire communication and digital services landscape. These frameworks should be unbiased, adaptable to different technologies, and supportive of innovative business models, diverse technologies, standards, and system architectures.

    Further reading

    International Chamber of Commerce (ICC), White Paper on Delivering Universal Meaningful Connectivity

    International Chamber of Commerce (ICC), Paper on Digitalisation for People, Planet and Prosperity

    IGF 2023 WS #197 Operationalizing data free flow with trust

    Updated: Thu, 26/10/2023 - 19:24
    Avoiding Internet Fragmentation
    Key Takeaways:
    The session highlighted the importance of horizontal, interoperable, and technologically neutral policy frameworks. Specific policy measures discussed included impact assessments, stakeholder commitments to prevent data fragmentation, support for encryption, and clear guidelines for government access to private sector data.
    Calls to Action
    Panellists emphasized that all stakeholders should prioritize international cooperation, common principles, and inclusive discussions to operationalize data free flow with trust and preserve the open, unfragmented essence of the Internet. Panellists called on policymakers to take a global approach that transcends regional boundaries, fostering trust, security, respecting human rights, and promoting innovation and economic growth.
    Session Report

    Introduction and key takeaways

    Global data flows are an essential engine for innovation, driving competitiveness, growth and enabling socioeconomic empowerment. However, mistrust in cross-border data transfers continues to grow due to concerns that national security, privacy or economic safety could be compromised if data transcends borders, leading to restrictive policies that deepen Internet fragmentation.

    The session touched on notable developments in promoting Data Free Flow with Trust (DFFT) through the OECD Declaration on Government Access to Personal Data Held by Private Sector Entities, and the G7 establishment of the Institutional Arrangement for Partnership (IAP). Against this background, speakers took stock of the innovative and empowering role of trusted global data flows, stressing that data only has value when it is accessible, useful, and able to be transferred. The discussion shifted to the challenges and risks posed by unilateral data governance policies and data localisation measures, including negative economic consequences and potential harm to human rights through surveillance.

    To address these challenges, participants highlighted the importance of horizontal, interoperable, and technologically neutral policy frameworks – noting that sound policies that enable data flows, while addressing legitimate concerns and aligning with international human rights law standards around security, privacy and commercially sensitive information, have the potential to create trust across the entire digital ecosystem. They emphasised the value of partnerships and inclusivity in policymaking, advocating for global approaches to data flows.

    Specific technical and policy measures referenced during the deliberations included data impact assessments, the commitment of stakeholders to prevent data fragmentation, support for encryption mechanisms as an enabler of trust and security, and the formulation of clear guidelines governing government access to private sector data. There was also the recognition that leveraging data for economic growth requires investment in wider infrastructures, including knowledge, skills and capacities to harness data. The session underscored the need for international cooperation and the creation of common principles, as well as inclusive dialogue on how to operationalise data free flow with trust and safeguard the open, unfragmented nature of the Internet.

    Call to action

    The session advocated for the establishment of universally accepted guiding principles concerning government access to personal data. Any principles should reflect and uphold international human rights law and standards. The panellists urged the adoption of a comprehensive global strategy that goes beyond regional confines, with a specific focus on building trust, enhancing security, upholding human rights and spurring innovation and economic growth. Common guiding principles could ultimately lead to effective collaborative efforts, involving all stakeholders and nations, to cultivate approaches that are interoperable and legally sound. This collaboration should promote the free exchange and responsible use of data in a manner that garners trust and upholds human rights, including high privacy standards. Policymakers were encouraged to endorse cross-border data flows while ensuring that users human rights are protected through transparent safeguards that are implemented in a manner that is non-discriminatory and does not create barriers to trade.

    Further reading

    ICC Policy Primer on Non-Personal Data

    ICC White Paper on Trusted Government Access to Personal Data Held by the Private Sector

    IGF 2023 Lightning Talk #102 The International Legal Dimension of ICTs

    Updated: Thu, 26/10/2023 - 19:19
    Cybersecurity, Cybercrime & Online Safety
    Key Takeaways:

    The legal aspects of ICT are only debated on the political level and not on the technical level.

    ,

    There is no cooperation mechanism at the multistakeholder level to contest this agenda.

    Calls to Action

    Unilateral standards need to be developed.

    ,

    The discussion on the development of a universal legally binding document needs to be continued.

    Session Report

    The final session focused mainly on the effective application of international law in the telematics and ICT dimension. Given the differences of opinion on how existing international law should be applied in cyberspace, and on whether a legally binding international mechanism needs to be developed, the session speakers looked into this aspect through the lens of different regions of the world. It was noted that the discussion on this matter should certainly be continued. The problem is that legal mechanisms never catch up with the latest technology, so governments need to advance the development of an appropriate legal dimension that will, without doubt, cover this fourth dimension.

    IGF 2023 WS #349 Searching for Standards: The Global Competition to Govern AI

    Updated: Thu, 26/10/2023 - 19:11
    AI & Emerging Technologies
    Key Takeaways:

    Different jurisdictions and organizations are taking diverse approaches to AI governance. These regulatory processes are critically important insofar as they will likely establish our framework for engaging with AI’s risks and harms for the coming generation. There is a pressing need to move expeditiously, as well as to be careful and thoughtful in how new legal frameworks are set.

    ,

    Learning from previous internet governance experiences is crucial. While discussions around how to prevent AI from inflicting harm are important, they will meet with limited success if they are not accompanied by bold action to prevent a few firms from dominating the market.

    Calls to Action

    A global governance mechanism that coordinates and ensures compatibility and interoperability between different layers of regulation will be needed. But successful regulation at a national level is indispensable. National governments will be responsible for setting up the institutions and laws needed for AI governance. Regional organizations, state-level regulation, and industry associations are all influential components of this ecosystem.

    ,

    While industry standards are important, public-oriented regulation and a wider set of policy interventions are needed. As for self-assessment and risk-assessment mechanisms, while they may become critical components of some AI regulatory structures, they may not succeed without sufficient enforcement to ensure that they are treated as more than just a box-checking exercise.

    Session Report

    Searching for Standards: The Global Competition to Govern AI

    IGF session 349 Workshop Room 1

    A global competition to govern AI is underway as different jurisdictions and organizations are pursuing diverse approaches ranging from principles-based, soft law to formal regulations and hard law. While Global North governments have dominated the early debate around standards, the importance of inclusive governance necessitates that the Global Majority also assumes a role at the center of the discussion.

    A global survey reveals diverse approaches. The European Union's AI Act is the most prominent process, but it is far from the only model available. Singapore is amending existing laws and deploying tools to help companies police themselves, while Japan is combining soft-law mechanisms with some hard-law initiatives in specific sectors based on a risk-assessment approach. The US is considering a similar approach as it begins to create the frameworks for a future AI governance structure. In the Global South, several countries in Latin America and Africa are actively engaging in the AI discussion, with a growing interest in a hard-law approach in the latter.

    These regulatory processes are critically important insofar as they will likely establish our framework for engaging with AI’s risks and harms for the coming generation. There is a pressing need to move expeditiously, as well as to be careful and thoughtful in how new legal frameworks are set.

    Different layers of regulation and governance strategies will be critical for creating a framework that can address AI’s risks and harms. First, because AI is a cross-border form of human interaction, a global governance mechanism will be needed to coordinate and ensure compatibility and interoperability between different layers of regulation. While this global layer could take the form of a soft law (declaration or recommendation), a more binding document (e.g., a convention) could also be considered as an effective way to coordinate AI regulation globally. From UNESCO’s perspective, a UN-led effort is critical, not only because AI requires a global multi-lateral forum for governance but also because unregulated AI could undermine other priorities like sustainable development and gender equality.

    Despite the need for global governance, successful regulation at the national level is essential. Ultimately, national governments are responsible for setting up the institutions and enacting and enforcing laws and regulations needed for AI governance.  Regional organizations, state-level regulation, and industry associations are all influential components of this ecosystem.

    At the same time, industry standards may be the most common form of regulatory intervention in practice. In such a context, the industry should consider developing responsible AI as part of their corporate social responsibility or environmental social governance practices - including the implementation of guidelines or principles on AI’s uses and development, codes of conduct, or R&D guidelines, since the way in which companies develop and use AI will have a huge impact on society as a whole.

    However, while it is important to raise industry standards and to involve companies in the regulatory process, we need to understand the incentives that drive these companies to work with AI, which is primarily to monetize human attention and to replace human labor. For that reason, industry standards should be complemented by public-oriented regulation, and a wider set of policy interventions.

    As for self-assessment and risk-assessment mechanisms, while they may become critical components of some AI regulatory structures, they may not succeed without sufficient enforcement to ensure that they are treated as more than just a box-checking exercise. It is also important to keep in mind that different approaches may only be relevant to specific subsets of AI, such as generative or decision-making AI systems.

    Small countries will face unique challenges in implementing effective AI governance. Small nations that regulate too quickly could end up pushing innovation elsewhere. These countries could establish their role in AI governance if they strategize and work together with like-minded initiatives or systems. While the deployment and design of AI are happening in the largest countries, we should be aware that AI will also be heavily used in other parts of the world. Focusing on regulating not only the creation but also the use of AI applications will be key to the success of AI regulatory and governance experiences in small countries. Over the past decades, machine learning research and application have moved from public to private hands. This may be a problem, especially for small countries, as it shortens the time from idea to deployed application while limiting the ability of governments to restrict potentially harmful behavior.

    Learning from previous Internet governance experiences is crucial to AI governance. While we usually think about AI as if it is a brand-new thing, we need to think about its components and break down what exactly we mean by AI, including infrastructure data, cloud computing, computational power, as well as decision-making.

    We need to consider the impact of market power on AI governance, given that AI trends towards monopoly (large data, lots of computational power, advanced chips, etc.). While the discussions around how to prevent AI from inflicting harm are important, and issues of preventing exploitation are necessary, they will meet with limited success if they are not accompanied by bold action to prevent a few firms from dominating the market and various parts of the AI tech stack. AI governance should focus on breaking down the various components of AI - such as data, computation power, cloud services, and applications - to redress monopolistic and anti-competitive practices. This includes confronting consolidation in the cloud market and exploring public options. Regulators could also examine the possibility of forcing the handful of big tech firms that are providing the leading AI models to divest cloud businesses or eliminate the conflict of interest that incentivizes them to self-preference their own AI models over those of rivals.

    Another valuable lesson comes from the early regulation of the Internet in terms of copyright and freedom of expression. We need to think about to what extent the modeling of personal data protection laws and the current debate on platform liability should influence the debate on the regulation of AI’s potential harms. The first generation of Internet regulation left us with much stricter enforcement of intellectual property rights than enforcement of privacy rights, a legacy of early prioritization of the harms that were deemed most urgent decades ago, but which persists to this day. This should be instructive about the need to be deliberate and careful in selecting how harms are understood and prioritized in the current phase of AI regulation, as these technologies continue to proliferate.

     

    IGF 2023 WS #196 Evolving AI, evolving governance: from principles to action

    Updated: Thu, 26/10/2023 - 18:31
    AI & Emerging Technologies
    Key Takeaways:
    The session discussed existing AI guidelines, principles and policies. Speakers shared lessons learned from their development, adoption and implementation. They stressed the need for comprehensive, inclusive, interoperable and enabling policies that help harness AI’s developmental and socio-economic benefits, operationalize globally shared values and remain flexible enough to be adapted to local specificities and cultural contexts.
    Calls to Action
    Set comprehensive, inclusive and interoperable AI policies by meaningfully involving all stakeholders across all levels of the AI policy ecosystem: responsible development, governance, regulation and capacity building.
    Session Report

    Introduction and key takeaways

    AI, as a general-purpose technology, carries the potential to enhance productivity and foster innovative solutions across a wide spectrum of sectors, including healthcare, transportation, education, and agriculture, among others. However, its design, development, and deployment introduce challenges, especially regarding the role of humans, transparency, and inclusivity. Left unaddressed, these risks can hamper innovation and progress, jeopardising the benefits of AI deployment, while undermining the crucial trust required for the widespread adoption of AI technologies.

    Against this context, the session convened a diverse panel of speakers who explored the current state of play in developing AI governance frameworks. The speakers recognised the progress of international efforts to guide the ethical and trustworthy development and deployment of AI. Notable examples referenced included the OECD's AI Principles, UNESCO's recommendations on AI ethics, declarations from the G7 and G20, the EU's AI Act, the NIST AI Risk Management Framework, ongoing efforts at the Council of Europe to draft a convention on AI with a focus on human rights, democracy, and the rule of law, the African Union's endeavours to draft an AI continental strategy for Africa, and a plethora of principles and guidelines advanced by various stakeholders.

    As AI continues to evolve, panellists suggested the need to harness its full potential for socioeconomic development, while ensuring alignment with globally shared values and principles that prioritise equality, transparency, accountability, fairness, reliability, privacy, and a human-centric approach. The panellists agreed that achieving this equilibrium will necessitate international cooperation on a multistakeholder and multilateral level. A key takeaway was the necessity for capacity building to enhance policymakers' awareness and understanding of how AI works and how it impacts society.

    The session recognised, among others, the merits of self-regulatory initiatives and voluntary commitments from industry, applauding their agility and effectiveness in advancing responsible AI development. The discussions advocated for interoperability of approaches to governance and suggested that any policy and regulatory framework must be adaptable and grounded in universally shared principles. This approach was seen as vital to navigate the ever-evolving technology landscape and to accommodate the unique demands of various local contexts and socio-cultural nuances.

    Overall, comprehensive, inclusive, and interoperable AI policies were recommended, involving all stakeholders across the AI policy ecosystem to promote responsible development, governance, regulation, and capacity building.

    Call to action

    There was a resounding call for comprehensive, inclusive, and interoperable AI policies. Such policies, drawing upon the collective expertise of all stakeholders within the AI policy ecosystem, can foster responsible development and effective governance of AI, as these technologies continue to evolve. This holistic approach would pave the way for a more responsible and sustainable AI landscape.

    IGF 2023 Day 0 Event #133 Aligning priorities for a shared vision on digital policy

    Updated: Thu, 26/10/2023 - 18:25
    Global Digital Governance & Cooperation
    Key Takeaways:
    This session emphasized linkages between different areas of digital policy, from the fundamentals of connectivity, to the governance of data, to understanding and mitigating the risks raised by AI. The panel stressed the importance of cross-ecosystem collaboration and the value of partnerships between the public and private sectors, with one panellist summarising the conversation by saying that all stakeholders are part of the same ecosystem.
    Session Report

    Introduction and key takeaways

    Our world has gone digital, transforming industries and economies globally. While this digital shift offers innovation and sustainable development opportunities, unilateral policies and governance can deepen inequalities and disrupt global economies, eroding trust in digital technologies. Numerous global organizations, involving multiple stakeholders, are dedicated to connecting divergent policy approaches and striving for worldwide, adaptable solutions. Their mission is to comprehend and regulate evolving technologies and leverage their potential for sustainable, inclusive socioeconomic progress.

    The session brought together government and industry representatives to discuss mutual priorities for advancing sustainable development through partnerships, as described in Goal 17 of the SDGs. It was arranged around three topics: AI and global AI governance, cross-border data flows and global data governance, and connectivity and digitalisation for development.

    The session started with panellists sharing views on the governance of AI. The conversation opened with the United States’ approach to addressing risks, both long-term risks (safety, security, threats to human existence) and short-term risks raised by current uses of AI (privacy, discrimination, disinformation, labour market). This includes securing voluntary commitments from leading companies. In turn, speakers stressed the need for domestic efforts to align with international initiatives like the Global Partnership on AI, the OECD, and the G7 Hiroshima Process – highlighting the importance of multistakeholder spaces of collaboration.

    The conversation then moved on to how the private sector is addressing those risks, with panellists highlighting the need for a robust governance framework, while running through some of the practical measures companies take to address AI risks. In addition, panellists suggested that the world is looking to both policymakers and businesses to respond to those risks, as action needs to be accelerated. In particular, panellists suggested that action was needed on three simultaneous fronts: global harmonised principles, standards and voluntary measures, and concrete regulation on a national level.

    The next segment of the session covered data governance, with panellists discussing how to create a world where data benefits everyone, including the challenge of aligning data governance with economic development. Other panellists highlighted private sector efforts for data free flow with trust and advocated for principles, privacy protection, and investment-friendly policies. The discussion underscored the importance of inclusive data governance to support a global digital economy.

    In the final segment of the session, speakers discussed connectivity and digitalisation for development. Government panellists emphasised the need for a multistakeholder approach in shaping the global connectivity policy agenda. Other panellists highlighted private sector efforts, suggesting that to meet ambitious connectivity goals we need greater investment and an enabling policy environment. Panellists also reflected on changing market dynamics and their impact on affordability and choice for consumers. The panel stressed the importance of cross-ecosystem collaboration, with one panellist summarising the conversation by saying that all stakeholders are part of the same ecosystem and rely on one another to connect everyone, everywhere.

    Ultimately, this workshop highlighted that there are many areas where governments and the private sector are forging great partnerships to resolve fundamental questions about how to govern digital technologies.

    Call to action

    The panellists underscored the need for cross-ecosystem development and an approach to policymaking which appreciates the interconnections and dependencies between different areas of digital policy. There was a consensus on the importance of securing voluntary commitments to address the risks associated with AI while ensuring the development of globally harmonised principles. With regards to data governance, the discussion emphasised the creation of data governance frameworks that are inclusive, transparent and build on trust.  The speakers also agreed on the importance of the multistakeholder approach in shaping global policy related to digital technologies. Finally, speakers underlined the need for increased investment in connectivity, the establishment of an enabling policy environment, and the promotion of cross-ecosystem collaboration to connect everyone, everywhere.

    IGF 2023 WS #465 International multistakeholder cooperation for AI standards

    Updated: Thu, 26/10/2023 - 18:21
    AI & Emerging Technologies
    Key Takeaways:

    Initiatives like the AI Standards Hub highlight the importance of bringing together expertise from across academic institutions, national standards bodies, and national measurement institutes for unlocking the potential of standards as effective AI governance tools underpinned by multi-stakeholder processes. It is key for such initiatives to link up, identify synergies, and pursue opportunities to coordinate efforts across countries.


    Increased international networking across civil society, academia, the technical community, industry, and regulators/government is critical for addressing capacity building, promoting participation from all stakeholder groups, and advancing global alignment in the field of AI standardisation. Efforts aimed at individual stakeholder groups have an important role to address the needs of groups currently underrepresented in AI standardisation.

    Calls to Action

    MAG should actively consider what the IGF can do to advance the promotion and collaboration on globally recognised AI standards (including technical measurement standards).


    Civil society, academia, the technical community, industry, regulators, and government should actively engage with AI standards initiatives, such as the AI Standards Hub, designed to advance multi-stakeholder input in AI standardisation.

    Session Report

    The session was dedicated to exploring the role that multistakeholder participation and international cooperation must play to unlock the potential of standards as effective AI governance tools and innovation enablers around the world. The workshop followed a three-part structure. The first part presented the work of the AI Standards Hub, a UK initiative dedicated to building a diverse community around AI standards through knowledge sharing, capacity building, and world-leading research. The second segment featured a panel of four speakers, bringing in diverse perspectives from across different stakeholder groups and geographic regions. The final part of the workshop centred on gathering questions and comments from the audience participating in-person and remotely via online platforms.

    Segment 1: Introduction to the AI Standards Hub. The workshop started with the introduction of the AI Standards Hub, a joint UK initiative led by The Alan Turing Institute, the British Standards Institution (BSI), and the National Physical Laboratory (NPL). Dr Matilda Rhode, the AI and Cyber Security Sector Lead at BSI, began by introducing the mission of the Hub, explaining the significance of standards for the evolution of the AI ecosystem, and providing a brief overview of standards development processes. Dr Florian Ostmann, the Head of AI Governance and Regulatory Innovation at the Alan Turing Institute, addressed the importance of stakeholder diversity in AI standardisation and provided a snapshot of the Hub’s work across its four pillars – (1) AI standards observatory, (2) community and collaboration, (3) knowledge and training, and (4) research and analysis. Finally, Sundeep Bhandari, the Head of Digital Innovation at NPL, discussed international collaborations pursued by the Hub with organisations such as the OECD, NIST and SCC, and outlined future collaboration opportunities for building international stakeholder networks, conducting collaborative research, and developing shared resources on AI standards.

    Segment 2: Panel discussion. Nikita Bhangu, the Head of Digital Standards policy in the UK government's Department for Science, Innovation and Technology (DSIT), started off the panel discussion by providing an overview of the UK government’s policy approach to standards in the context of AI. Referring to the recent AI white paper, Ms Bhangu highlighted the important role that standards, and other non-regulatory governance mechanisms and assurance techniques, can play in creating a robust set of tools for advancing responsible AI. Elaborating on the complexity of the standardisation ecosystem, she noted there are many barriers that stakeholders face in meaningfully engaging with AI standards and that it is vital for governments to support diverse stakeholder participation in standards development processes. Reflecting on DSIT’s policy thinking that led to the creation of the AI Standards Hub, Ms Bhangu noted that key aims guiding the initiative were to increase adoption and awareness of standards, create synergies between AI governance and standards, and provide practical tools for stakeholders to engage with the AI standards ecosystem.

    Following this, the international panel took turns to discuss the most important initiatives in AI standardisation aimed at advancing multistakeholder participation, addressed questions on emerging stakeholder needs and challenges in different parts of the world, and discussed the importance of international collaboration on AI standards.

    Ashley Casovan, the Executive Director of the Responsible AI Institute, provided insights on Canada’s AI and Data Governance Standardization Collaborative from the perspective of civil society. She explained that the initiative aims to bring together multiple stakeholders to reflect on AI standardisation needs across different contexts and use cases. Wan Sie Lee, the Director for Data-Driven Tech at Singapore’s Infocomm Media Development Authority (IMDA), stressed that there is a widespread recognition of the importance of international cooperation around AI standards in Singapore. This is exemplified by Singapore’s active engagement in ISO processes and close collaborations with other countries. Elaborating on the city-state’s efforts to achieve international alignment on AI standards, Ms Lee pointed to Singapore’s AI Verify initiative, which closely aligns with NIST’s recently published Risk Management Framework. Aurelie Jacquet, Principal Research Consultant on Responsible AI for CSIRO-Data61, highlighted several Australian initiatives centred on advancing responsible AI, including Australia’s AI Standards Roadmap, the work of the National AI Centre and Responsible AI Network, and the development of the NSW AI assurance framework. These initiatives are dedicated to developing education programmes around AI standards, strengthening the role of standards in AI governance, and leveraging existing standards to provide assurance of AI systems in the public sector and beyond.

    Moving on to the topic of stakeholder needs and challenges, Nikita Bhangu pointed to the lack of available resources and dedicated standards expertise within SMEs, civil society, and governments, which often leads to these groups being underrepresented in AI standards development processes. Ashley Casovan highlighted similar challenges in Canada, where a lack of resources in government teams is hindering the process of analysing the information collected by the Collaborative. Ms Casovan also pointed to the efforts of the Canadian Collaborative to include perspectives from all domains of civil society, as well as indigenous groups, to ensure that their input is taken into consideration when finding solutions to harms posed by AI. Wan Sie Lee noted that the Singaporean government is trying to address the challenge of limited resources by focusing on areas where it can provide the most value to the global conversation, such as tooling and testing. Furthermore, to improve stakeholder diversity, Singapore is making an active effort to include voices from industry in its policy approaches. Finally, Aurelie Jacquet addressed the complexity of the standardisation ecosystem and the challenges stakeholders face in understanding the standards development processes. To address this challenge, she added, experts in Australia have focused on drafting white papers and guidance documents to help organisations understand how these processes work.

    Talking about priorities for international cooperation, the panellists stressed that understanding the approaches taken by other countries is essential to avoiding duplication of work, building synergies, and understanding what kinds of coordination efforts are required. For this reason, multilateral fora like the OECD and IGF serve as very important platforms. Additionally, initiatives such as the AI Standards Hub were highlighted as important avenues for building networks internationally, identifying shared goals and challenges across different stakeholder groups, and jointly devising strategies to build an inclusive environment around AI standards.

    Segment 3: Audience Q&A. The final segment of the workshop provided an opportunity for attendees to ask questions, share their perspectives, and get additional input from the speakers. The session heard from the Coordinator of the Internet Standards, Security and Safety Coalition at the IGF, who stressed the importance of using standards that are developed by the technical community outside of government-recognised standards development organisations to inform national policies on AI. They suggested reaching out to the technical community in fora such as the IETF or IEEE and aligning on key areas of AI standardisation. One of the online participants highlighted the value of further exploring strategies for increasing SME engagement in AI standards development. They proposed that this subject could be considered as a potential topic for inclusion in EuroDIG, Europe’s regional IGF, taking place in Vilnius on 17-19 June 2024. The session also heard from an audience member representing Consumers International, who emphasised the value of consumer organisations in ensuring responsible AI development, since they represent the end users of these products and services. They stressed that consumer organisations possess a wealth of evidence to support standards development and can help to ensure that standards are firmly rooted in the real-life experiences and needs of their end-users. The participant also highlighted the AI Standards Hub as an important resource for Consumers International to increase their engagement in AI standardisation.

    IGF 2023 Open Forum #30 Intelligent Society Governance Based on Experimentalism

    Updated: Thu, 26/10/2023 - 17:27
    AI & Emerging Technologies
    Key Takeaways:

    1) The advent of AI is poised to lead to the evolution of social structures, economic production, and residential lifestyles, thereby giving rise to an intelligent society that empowers climate governance, healthcare advancements, and educational management. It even aids in the realization of the SDGs, thus infusing new impetus and resilience into societal development.


    2) The evolution of AI governance is transitioning from conceptualization to the phase of practical rulemaking and enforcement. Countries and regions need to strengthen their assessment of potential risks in AI development, establish robust legal frameworks to ensure the healthy advancement of AI, and devise ethical norms for intelligent society governance. The global community should join hands to promote cooperation on intelligent society governance.

    Calls to Action

    1) The panelists agreed that, faced with the profound transformations brought about by AI technology, nations worldwide are exploring new values, concepts, and approaches for intelligent society governance. Constructing an intelligent society imbued with humanistic warmth has become the historical mission of contemporary humanity.


    2) The panelists advocate: the promotion of AI-empowered public services; the enhancement of assessment and prevention of potential risks in intelligent societies; the dissemination and application of governance principles and practices for intelligent societies; the active exploration of international standardization in the governance of intelligent societies; and the promotion of inclusive sharing and agile governance in intelligent societies.

    Session Report

    In today’s era, digital technologies represented by AI serve as the pioneering force in the global scientific revolution and industrial transformation, increasingly integrating into all aspects of economic and social development, thereby profoundly altering how society is governed. The iterative development of generative AI in 2023, with ChatGPT serving as a quintessential example, has once again fueled human apprehensions concerning the potential risks posed by an intelligent society.

    This open forum was hosted by the Bureau of Information Technology Development of the Cyberspace Administration of China, with support from the Institute of Intelligent Society Governance of Tsinghua University and the Center for Science, Technology & Education Policy of Tsinghua University. Under the theme “Intelligent Society Governance Based on Experimentalism: Insights from Cross-Country Experiences”, the open forum invited six experts and scholars from government bodies, research institutions, and social organizations in China, the United Kingdom, and Brazil to engage in discussions on the practical impacts of AI applications in different countries and case studies about intelligent society governance, thereby fostering transnational knowledge exchange. The open forum was committed to identifying effective governance models, cooperation frameworks, and policy tools that will cultivate a sustainable and humanistic intelligent society.

    1. To further enhance the capacity building for intelligent society governance on a global scale, and to foster a humanistic intelligent society.

    The world is undergoing profound changes that are unprecedented in the past century, with issues such as wealth disparity, environmental and climate change, as well as regional conflicts, standing as common challenges faced by human society. Enhancing the capacity building for intelligent society governance, promoting the widespread application of intelligent technologies, and fostering a humanistic intelligent society are crucial measures to meet these challenges.

    Building a humanistic intelligent society requires vigorous efforts to empower public services with AI. Jiang Wang, Deputy Director of the Information Bureau of Cyberspace Administration of China, asserted that we should strengthen the integration of AI with public services such as elderly care, education, medical care, social security, and sports. Considering the need to safeguard and improve people’s livelihoods and create a better life for the public, AI should be utilized to enhance the public services and social governance levels of government departments. Simon Marvin, Professor at the Urban Institute of the University of Sheffield and Professor at the Sydney School of Architecture, Design and Planning, posited that, through examples such as Japan’s Society 5.0, Smart Dubai, and San Francisco, countries could actively explore how to construct regulatory systems while AI was rapidly developing, ensuring that AI better serves public domains such as healthcare and education.

    The panelists all agreed that, in the face of the tremendous changes brought about by AI, countries around the world are exploring new values, new concepts, and new directions for intelligent society governance. Constructing a humanistic intelligent society has become a historical mission for contemporary humanity, necessitating a collective effort to guide AI in a direction conducive to human society's development.

    2. To pay close attention to the social impact prompted by AI, strengthen the assessment of potential risks in AI development, improve laws and regulations to safeguard the healthy development of AI, and formulate ethical norms for intelligent society governance.

    The industrial revolution incited by AI will exert a significant influence on human production and lifestyle. Alessandro Golombiewski Teixeira, Special Advisor to the President of the BRICS New Development Bank, Distinguished Professor of the School of Public Policy and Management of Tsinghua University, and former Minister of Tourism of Brazil, emphasized that AI would lead to the evolution of social structures and alter the way humans interact socially. The resultant intelligent society will address a series of major challenges such as climate change and may facilitate the achievement of the Sustainable Development Goals (SDGs). Cui Huang, Professor at the School of Public Administration and Director of the Department of Information Resources Management of Zhejiang University, indicated that China’s practice of promoting the modernization of education and building a strong educational nation in the intelligent era, through the integration of digital technology and education, demonstrated that intelligent technology held immense potential in educational governance and could infuse new vitality and resilience into social development.

    The transformation into an intelligent society has brought about issues and challenges about legal privacy, moral ethics, and public governance in human society. Jun Su, Dean of the Institute of Intelligent Society Governance of Tsinghua University and Director of the Think Tank Center of Tsinghua University, believed that in the face of risks and challenges posed by new technologies represented by ChatGPT, we should adopt a prudent, confident, and proactive attitude, employ a scientific evidence-based approach to comprehensive assessments, and facilitate its benign development.

    Countries worldwide are actively putting AI to use in addressing social problems and accumulating experience in the process, gradually forming relatively complete regulatory systems and ethical norms. As Zhiyuan Xu, Deputy Chief Engineer of China Academy of Information and Communications Technology, stated, the development of AI governance was transitioning from conceptualization to the actual rulemaking and implementation stage. Globally, countries are actively releasing AI rules and policies under governance objectives such as reliability, controllability, human-centeredness, fairness, and justice. The panelists called for distilling advanced experiences and practices from around the world, further promoting the concepts and practices of intelligent society governance, and actively exploring the international standardization of intelligent society governance.

    3. The international community should work together to promote exchange and cooperation in intelligent society governance, uphold the principle of technology for social good, and build a community with a shared future for mankind in the intelligent era.

    The panelists agreed that it was necessary to explore the path of intelligent society governance under the concept of building a community of human destiny, strengthening international exchanges and cooperation in intelligent society governance, and promoting inclusive sharing, agile governance, and the realization of differential development and win-win cooperation among countries. Jiang Wang, Deputy Director of the Information Bureau of Cyberspace Administration of China, pointed out that China is willing to exchange and share work experiences in intelligent society governance experiments with other countries, actively contribute to Chinese solutions, and learn from each other’s strengths and weaknesses. Jun Su, Dean of the Institute of Intelligent Society Governance of Tsinghua University and Director of the Think Tank Center of Tsinghua University, called on countries to strengthen academic cooperation and exchange in the field of intelligent society governance, widely hold open, diverse, and inclusive academic conferences, and publish related academic journals.

    The panelists all advocated that the international community should strengthen dialogue and exchange, calling on researchers and practitioners from different countries and academic fields around the world to join in the research and discussion of global intelligent society governance and make academic contributions to building a humanistic intelligent society. They hoped that countries could deepen pragmatic cooperation, jointly face the opportunities and challenges brought by intelligent technology, and work together towards a new stage of human civilization. They looked forward to everyone’s efforts to make the public in countries around the world pay more attention to the application and future of AI, and jointly construct a new chapter of a community with a shared future for mankind in the intelligent era.

    IGF 2023 DC-DNSI Closing the Governance Gaps: New Paradigms for a Safer DNS

    Updated: Thu, 26/10/2023 - 17:16
    Global Digital Governance & Cooperation
    Key Takeaways:

    There should be more coordination across the Internet ecosystem dealing with online harms, particularly to deliver proportionate responses that look beyond action at the DNS level. Asia and Latin America offer good examples of (a) better coordination across the ecosystem and existing initiatives (e.g. operation of .kids by .Asia) and (b) capacity building between the DNS, content and tech communities, and policy makers, LEAs and the judiciary (LACTLD).

    Calls to Action

    The numbers and names communities as well as companies dealing with content need to actively build capacities with policy makers, LEAs and the judiciary to help them understand adequate and proportionate options for dealing with online abuse. The Internet ecosystem needs to have better coordination mechanisms in place that break down industry silos and build ecosystem-wide consensus and collaborations for addressing harmful content.

    Session Report

    The purpose of the session was to discuss governance gaps in achieving a safer DNS. There is a separation between structural layers of the Internet and content issues, and the ecosystem understands those lines. But when we talk about harmful content, sometimes those lines become blurred and governance gaps become evident.

    The conversation sought to discuss what to do about those gaps and how to be action-oriented.

    Keith Drazek from Verisign began by setting the scene. In his view, the ecosystem seeking to address DNS issues, security, and online harms has to recognise that each actor has different roles, responsibilities, and capabilities, whether that is a registrar, a registry, a CDN or an ISP. There are different governance models: for example, the ICANN community and the gTLDs have governance by contract. ccTLDs, on the other hand, develop local governance models based on their relationships with local governments or local Internet communities. Hosting companies and providers are subject to the laws of their respective jurisdictions, and operate in response to that regulatory guidance. In the overlap of these various models, governance gaps remain. He believes there is an opportunity for better communication, collaboration, and good work across the various parts of the DNS ecosystem, up and down the stack. There is also a need for the technical operators, registrars, and registries to collaborate better together as a sector to mitigate online harms in a proactive way. This will help reduce costs and demonstrate to regulators that the industry is taking the initiative. Hopefully, it will also help the industry avoid fragmented regulation across different jurisdictions.

    He also highlighted that there are conversations to be had about the advent of blockchain, alternate identifiers and related technologies, as there are governance gaps in that area as well that the industry will need to address collectively.

    Jia Rong Low from ICANN provided background on ICANN's role in supporting abuse mitigation. He explained how ICANN is governed by a multistakeholder model, and how the community was adamant that the issue of DNS abuse needed addressing within the ICANN structure. At the time of the session, there was an open voting period to approve updates to the agreements between ICANN and contracted parties that would incorporate specific actions to address DNS abuse as a contractual requirement. In his view, “sometimes with models such as ICANN’s, it can feel like things are not moving, but the community has come a long way.”

    Esteve Sanz from the European Commission (EC) came in next with a perspective from a regulatory body. He started off by highlighting that bodies such as the EC are not just regulators; they are members of the multistakeholder community, and in the EC’s case, they are very active in ICANN. The EC does not have any new regulation in mind, and is currently looking forward to supporting ICANN in what they see as “a moment of truth in dealing with abuse.” Esteve shared the EC's view that the amendments to the agreements of the contracted parties have not gone far enough, missing elements such as transparency or proactive measures.

    Fiona Alexander from American University welcomed the reactivation of the DC as a safe place for conversation. She highlighted how DNS abuse, and what constitutes harm, can mean different things to different stakeholder groups, especially governments. She went on to highlight jurisdictional differences in approaches: some jurisdictions prefer a proactive approach (e.g. the EU), while others prefer demonstrated harm over preventive action (e.g. the US). In addressing harm, governments also have to balance the important issues of free expression and human rights. Addressing online harm is a cross-jurisdictional challenge that can be difficult to resolve. In her view, it is important to (a) have a shared understanding of terms and of some of the challenges, (b) look at the proportionality of the response, i.e. when to take a small versus a larger measure, and (c) consider who is best suited to take action. Voluntary commitments such as those reflected in the updated agreements of the contracted parties are good. They could also be more targeted and more rapid. When looking at voluntary action, it is really important to make sure there is transparency and due process in those systems.

    Jen Chung from .Asia reflected on the contractual amendments with ICANN. She explained how the .Asia organisation is looking forward to using the trusted notifiers system in collaboration with APNIC, APCERT and TWNIC to periodically identify risks and share lists. She pointed out how this type of collaboration with the ecosystem is important to tackle threats such as phishing, and highlighted how these are actions that go beyond their contractual obligations. She illustrated this point by connecting to the definition of DNS abuse: “DNS abuse can mean different things to different people; for the contracted parties it refers to malware, botnets, phishing and spam. But this is not intended to limit our sphere of work, we go above and beyond our contractual obligations.” Lastly, she concluded with a call to action to include the CSIRTs in work related to trusted notifiers, and mentioned that .Asia is discussing with APNIC and APCERT the possible set-up of a South Asia CERT. She highlighted how in regions where there are no harmonised approaches, unlike in the European Union, the onus is on operators like .Asia and other Internet organisations to step up and fill this gap.

    Rocio de la Fuente from LACTLD brought in a perspective from the ccTLD community. She explained how ccTLDs are not bound by the consensus policies formulated in ICANN, as their policies are based on local regulations established with their communities. She shared LACTLD’s experience organising workshops on dealing with illegal content and DNS abuse, targeted at judges, prosecutors and Law Enforcement Agencies (LEAs) and co-organised with LACNIC, ICANN and the region’s technical community organisations. The workshops have been successful in building cooperation networks with the judiciary and LEAs. “We see a positive impact when policy makers and LEAs can have direct conversations with their local ccTLD,” she explained. The private sector has also sometimes participated in the workshops to address issues related to illegal content on their platforms and services, beyond DNS threats or DNS abuse issues.

    Jean Jacques Sahel from Google came in next to bring a perspective from the private sector dealing more broadly with content-related issues. Jean Jacques began by pointing out that, from Google's perspective, the company is not trying to avoid regulation; the internet has achieved a certain level of maturity and regulation is to be expected. It is rather a question of how, and of understanding that much of it is self-regulation. He went on to share some lessons on how to tackle bad content and take action on inappropriate behaviour. Google analyses content flagged to it by users or governments and follows its content policies; on platforms like YouTube, it demonetizes bad content. It also seeks to build collaboration with relevant organisations. Homing in on APAC, there is a trend of increasing regulation: some jurisdictions adopt omnibus regulation covering all intermediaries, while others regulate social media only. Where regulators used to copy regulations from other jurisdictions, that has changed: they now add their own veneer. APAC is a very large market, so this is bound to impact millions of users.

    Jean Jacques highlighted that the one thing he sees as lacking is policy makers seeking input from the multistakeholder community: the tech community, industry and civil society. “We can remind them, and some of us are raising concerns of collateral damage, massive collateral damage to the ecosystem, but it gets scant attention.” He concluded that regulation is coming, and that regulators will go after whoever is able to take action. From a DNS industry perspective, the Internet’s core has been spared, but not for long. He called on regulators to leave room for freedom of expression and not to over-regulate.

    Esteve Sanz was invited to address Jean-Jacques’ points. Sanz said that the EU DSA offers an approach that strikes a good balance between users' fundamental rights and tackling abuse. In terms of coordination, he highlighted that the EC coordinated with the US on the Declaration for the Future of the Internet, which he described as a straitjacket to keep states from regulating the internet in ways that are harmful. Lastly, he warned of digital authoritarianism, where authoritarian governments use the internet to control their populations. “We cannot think the internet is just a tool to promote freedom.”

    Keith Drazek elaborated on the point of industry collaboration. He finds that industry does not collaborate sufficiently, especially up and down the stack or across the range of operators. There is an opportunity for registries, registrars, hosting companies, CDNs and ISPs to engage more constructively and proactively together, and to collaborate in identifying trends of bad actors and in devising mitigation strategies.

    He agreed that regulation is upon us, but urged for it to be informed, educated regulation that takes into account concerns raised by civil society. When it comes to content, things get very complicated from a rights perspective. Registries and registrars have only one option to address abuse: taking the entire domain out of the zone; the third level rests with the hosting company. If a piece of offending or harmful content sits on a third-level name or a website, the hosting company has to be involved in the conversation about how to mitigate those harms, to ensure proportionality.

    He offered five points for consideration that apply to dealing with online harms through the DNS, and also for any trusted notifier schemes. These include consideration of:

    1. Provenance of a threat, so that the closest stakeholder takes action
    2. Proportionality, to ensure actions do not disproportionately impact users or other parts of the ecosystem
    3. Transparency on how the DNS is used to mitigate online harms: what process was followed and what actions were taken
    4. Due process
    5. Recourse, so that the impacted party has a remedy if the action was wrong

    Fiona Alexander weighed in and said that what is unique about how the Internet operates is the multistakeholder model. It is therefore important that industry and governments do not broker deals on their own, but hold conversations within reach of the multistakeholder community. “How you do something is just as important as what you do.”

    Connecting to regulatory efforts by the EU, Jia Rong Low highlighted the impacts of the GDPR on the Whois database. He explained how he interacts with LEAs and relayed Interpol's complaint that Whois has become difficult to work with, noting that regulations create winners and losers.

    Jen Chung offered an example of how such coordination can play out. .Asia is the registry operator for .kids, one of the first gTLDs with a mechanism for restricting content. She explained how, downstream, there are hosting providers and DNS resolvers; at every point, abuse could be happening, all the way up to content. For .kids, they rely on Google AI to review content, and follow a policy informed by child rights and online rights experts. They are also highly transparent, keeping a paper trail of how reports are dealt with, and they offer recourse.

    Rocio de la Fuente came back with additional perspectives from the LAC region, where there are no overarching regional regulations to harmonise approaches. She explained that while abuse in ccTLD communities is low, the ccTLDs have introduced actions to help mitigate it. For example, .co has a national hotline for reporting CSAM and mechanisms in place to review reports.

    Some comments came from the audience. In response to the comments by the EC, Mark Datysgeld explained that the community came up with a technical definition of DNS abuse that could be agreed upon as a baseline to build on. Kanesh from Nepal urged for capacity building. Andrew Campling asked whether new standards introduce new governance gaps, hinting at DNS over HTTPS (DoH).

    In terms of what is required, the panel recommended:

    More capacity building, as done in the LAC region, with governments, LEAs, operators, judges, policy-makers and other relevant stakeholders. Esteve warned that the conversation has been dominated by the global north, and that this will play badly in the WSIS+ discussions.

    Bringing people together and having conversations: continuing the discussion about more coordinated action from the ecosystem, ensuring feedback from the multistakeholder community, and making it a sustained conversation.

    Clarity on what tools exist and what to scale. From a cooperation perspective, Jen Chung highlighted the need to join the dots on the things already being done and to scale what works.

    More collaboration on DNS security, including involvement of CSIRTs.

    Measuring DNS abuse.

    Being attentive to new standards as they are developed.

    Takeaways

    There should be more coordination across the Internet ecosystem dealing with online harms, particularly to deliver proportionate responses that look beyond action at the DNS level.

    Asia and Latin America offer good examples of (a) better coordination across the ecosystem and efforts to build collaborations among existing initiatives (e.g. the operation of .kids by .Asia) and (b) capacity building and networking between the DNS, content and technical communities on the one hand, and policy makers, law enforcement and the judiciary on the other.

    Calls to action

    The numbers and names communities as well as companies dealing with content need to actively build capacities with policy makers, LEAs and the judiciary to help them understand adequate and proportionate options for dealing with online abuse.

    The Internet ecosystem needs to have better coordination mechanisms in place that break down industry silos in dealing with online harms, and build ecosystem-wide consensus and collaborations for addressing harmful content on the Internet.

     

    IGF 2023 DC-OER The Transformative Role of OER in Digital Inclusion

    Updated: Thu, 26/10/2023 - 17:15
    Digital Divides & Inclusion
    Key Takeaways:

    To advance digital inclusion through OER, there is a need to go beyond raising awareness and building the digital skills to access, re-use, create and share OER, and to focus on making OER more inclusive of the diverse needs of learners. It is important not to pursue a ‘one size fits all’ strategy, but to allow localized control of content to build a knowledge commons. Stakeholders need incentives to contribute to and use this knowledge commons.

    ,

    The principle of the OER Recommendation that educational resources developed with public funds should be made available as OER is important. Investments should go into ensuring the quality of teaching and learning experiences, so that OER provides quality, accessible learning for all learners.

    Calls to Action

    Governments and Institutions need to support inclusive accessible OER initiatives that support the knowledge commons.

    ,

    Initiatives need to be led by the target communities, and the voices of those who will benefit from these initiatives have to be in the conversation. Best practices from other ‘Open Solutions’, such as Open Access and Open Data, can be useful for ensuring the interoperability of repositories and increased sharing of knowledge through OER.

    Session Report

    The IGF 2023 Dynamic Coalition on Open Educational Resources (DC-OER) convened a round table session under the theme "Digital Divides & Inclusion." In an increasingly interconnected world, access to quality education is paramount, but digital divides and inequalities persist. The session addressed this pressing issue by exploring the transformative role of Open Educational Resources (OER) in promoting digital inclusion, and featured international experts and diverse stakeholders.

    The UNESCO and IGF OER Dynamic Coalition showcased its dedication to promoting open educational content while respecting privacy, legal standards, and inclusivity. OER's potential to provide inclusive access to digital knowledge was a key highlight of the session. 

    The UNESCO OER Recommendation was the main focus of the session as the starting point for a common commitment and political will of Member States towards knowledge sharing through OER. This is the first international normative instrument to embrace the field of openly licensed educational materials and technologies in the UN System. 

    The Recommendation provides a clear definition of Open Educational Resources, namely that OER are learning, teaching and research materials in any format and medium that reside in the public domain or are under copyright but have been released under an open license, which allows no-cost access, re-use, re-purposing, adaptation and redistribution by others.

    The Recommendation also underscores that an open license is one that respects the intellectual property rights of the copyright owner, while granting the public the rights to access, re-use, re-purpose, adapt and redistribute educational materials.

    The five action areas of the 2019 Recommendation on OER were central to the discussion: capacity building, supportive policies, quality and accessibility, sustainability, and international collaboration.  

    The IGF 2023 session highlighted that OER is not merely a tool; it is a multifaceted solution that demands capacity building, supportive policies, quality, inclusion and accessibility, sustainability, and international collaboration to effectively bridge digital divides. These five pillars represent a collective commitment to unleashing the full potential of OER, empowering learners in the digital era, fostering inclusion, and ensuring that equitable access to quality education is within reach for all.

    Capacity building was emphasized as the foundation for effectively bridging digital divides by enabling educators and learners to create, access, and adapt OER. 

    Dr. Stephen Wyber of the International Federation of Library Associations and Institutions (IFLA) stressed the pivotal role of supportive OER policies, ensuring that educational resources are accessible to all, regardless of their digital environment. 

    Quality in OER was underlined as essential for meaningful learning experiences. OER should not contribute to inadequate learning, especially for marginalized individuals. Sustainability models for OER initiatives, including financial strategies, open procurement, inclusive policies, and ongoing community engagement, were highlighted as crucial for OER successes by Dr. Tel Amiel, UNESCO Chair in Distance Education at University of Brasilia (UnB). 

    For Dr. Patrick Paul Walsh, SDG Academy, UN Sustainable Solutions Network (UNSDSN), International cooperation emerged as a critical pillar of effective OER solutions, emphasizing the interconnected nature of digital education. 

    The session's insights and recommendations underscored the critical role that OER play in advancing digital inclusion, knowledge accessibility, and quality education for all. As the world continues its digital transformation, the power of OER is set to drive global change, ensuring that no one is left behind in the digital age. In line with the United Nations' Sustainable Development Goals, the session echoed the importance of implementing Member States-adopted standards for openly licensed digital education tools through the UNESCO Recommendation on Open Educational Resources (OER) 2019. It stressed the pivotal role of OER as a digital public good, ensuring an open, free, and secure digital future for all, aligning with the Global Digital Compact. 

    Key Takeaways: 

    • The importance of tailoring OER to diverse learners, avoiding a one-size-fits-all approach. 

    • The call for governments and institutions to support inclusive OER initiatives and promote the knowledge commons (Mr. Neil Butcher, OER Strategist, OER Africa). 

    • The necessity of community-led initiatives with input from those they aim to benefit. 

    • The session emphasized the role of governments, institutions, and communities in supporting inclusive and accessible OER initiatives, ensuring quality education for all. 

    • Capacity building is of paramount importance, spanning from creating awareness and optimizing resource utilization to promoting inclusivity. In alignment with the Recommendation on OER, addressing resource scarcity through the allocation of public funds for openly licensed educational materials and incentivizing educators to embrace OER is a crucial takeaway from the session, as Dr. Melinda Bandalaria, Chancellor and Professor, University of the Philippines Open University, recalled.

    Calls to Action: 

    The session urged active support by governments, institutions, and communities for inclusive and accessible OER initiatives. This support should extend to the diverse needs of learners, the promotion of the knowledge commons, and the assurance of quality education for all.

    The IGF 2023 DC-OER session serves as a reminder that OER is a catalyst for bridging digital divides and fostering digital inclusion. It's a call for collective action to make digital education truly inclusive and accessible to all. 

    In conclusion, the IGF 2023 DC-OER session highlighted the transformative potential of OER in bridging digital divides and fostering digital inclusion. The insights and recommendations from the session provide a roadmap for achieving these vital goals in an ever-evolving digital landscape.