IGF 2021 WS #68 AI Ethics & Internet Governance: Global Lessons & Practices

Time
Friday, 10th December, 2021 (12:45 UTC) - Friday, 10th December, 2021 (13:45 UTC)
Room
Conference Room 4

Organizer 1: Bu Zhong, Pennsylvania State University
Organizer 2: Xingdong FANG, College of Media and International Culture, Zhejiang University

Speaker 1: Kevin Martin, Private Sector, Western European and Others Group (WEOG)
Speaker 2: Xianhong Hu, Intergovernmental Organization, Western European and Others Group (WEOG)
Speaker 3: NADIRA AL-ARAJ, Technical Community, Asia-Pacific Group

Additional Speakers

Matthieu Guitton, Ph.D., professor and secretary of the Faculty of Medicine at Université Laval, Canada

Iva Georgieva, Ph.D., Researcher, Institute for Advanced Studies in Varna, Bulgaria

Amit Sharma, Ph.D., professor and director of Food Decisions Research Laboratory, Pennsylvania State University, USA
 
Lu Wei, Ph.D., professor and Dean of College of Media and International Culture, Zhejiang University, China
 
Lola Xie, doctoral student, Pennsylvania State University
Renata Carlos Daou, International student from Brazil, Penn State University
 
Moderator

Bu Zhong, Civil Society, Western European and Others Group (WEOG)

Online Moderator

Yuanyuan Fan, Civil Society, Asia-Pacific Group

Rapporteur

Bu Zhong, Civil Society, Western European and Others Group (WEOG)

Format

Birds of a Feather - Auditorium - 60 Min

Policy Question(s)

Inclusion, rights and stakeholder roles and responsibilities: What are/should be the responsibilities of governments, businesses, the technical community, civil society, the academic and research sector and community-based actors with regard to digital inclusion and respect for human rights, and what is needed for them to fulfil these in an efficient and effective manner?
Promoting equitable development and preventing harm: How can we make use of digital technologies to promote more equitable and peaceful societies that are inclusive, resilient and sustainable? How can we make sure that digital technologies are not developed and used for harmful purposes? What values and norms should guide the development and use of technologies to enable this?

This workshop deals with several key policy questions listed by IGF, such as emerging regulation, internet governance, social inequality, sustainable development, and digital policy. It also emphasizes economic and social inclusion of people with limited knowledge of AI and AI ethics.

SDGs

9.4

Targets: This workshop is highly relevant to internet governance, as it searches for new models to address the emerging challenges in AI ethics. A team of experts in this area will share their insights on how to apply ethical principles to AI applications and autonomous systems; human rights principles are an important source for developing AI ethical principles.

Description:

The pervasive use of social media and mobile apps shows that people increasingly see the outside world through the lens of artificial intelligence (AI), as human societies have rapidly adopted AI-powered autonomous systems and algorithms that significantly affect billions of users. To most users, however, such AI-based systems are simply “black boxes,” leading to massive information asymmetries between AI developers and users or policymakers. This workshop calls for treating AI ethics as part of internet governance in order to help bridge the AI-related digital divide. AI ethics, including robot ethics (roboethics), comprises a set of values, principles, and techniques that employ widely accepted standards of right and wrong to guide moral conduct in the development and deployment of AI technologies. The speakers will discuss concerns and moral dilemmas such as whether AI and robots will pose a threat to humans in the long run, and whether using certain AI systems and robots, such as killer robots in wars, is problematic for humanity.

This workshop calls for AI developers to design AI-powered systems with ethically acceptable behavior in mind for situations where robots, algorithms, and other autonomous systems such as self-driving vehicles interact with humans. This requires AI developers to apply high-level ethical considerations to all types of AI applications and systems; human rights principles are an important source for developing AI ethical principles. As a result, AI-driven actions and products must be assessed against ethical criteria and principles. For example, when an insurance company charges a certain group of people higher premiums based on age, gender, or race data, such a practice would violate the ethical principle of equal or fair treatment.

Expected Outcomes

This workshop considers internet governance models for addressing AI ethical challenges by reducing information asymmetries and searching for social consensus. More importantly, it should produce insights concerning more advanced internet governance models, such as active matrix theory, polycentric governance, hybrid regulation, and mesh regulation, which provide both inspiration and conceptual guidance on how a future internet governance model should be developed to promote AI ethics.

This workshop offers both virtual and in-person participation. Some stakeholder groups will be invited by the rapporteurs to join the discussion in the coming months.

Key Takeaways

AI ethics fundamentally requires attention to how we make good use of the data we have. The capability to process data is essential for making AI ethics transparent. The evolution of AI is affecting human perception, cognition, and interaction, which will in turn affect our concept of humanity.

AI may also increase inequality between countries. Three solutions were proposed: 1. international organizations such as the United Nations can act to reduce the divide; 2. multilateral collaboration between different countries; 3. underdeveloped countries should invest more in both hardware and software, especially by increasing education around this new technology area.

Call to Action

It is not just companies' responsibility to prevent such things from happening again. Everybody needs to take part in this great process: governments, big companies, media organizations, international organizations, and, most importantly, the general public and individuals. Everybody, including us, needs to play a role in this process.

Session Report

Some ideas raised by speakers

  1. China has entered the intelligent media age, and an intelligent divide is emerging. Yet AI is not a panacea for the media industry; three risks deserve attention. The risk of information cocoons drew the most concern from users, while privacy invasion and value erosion were perceived as challenges as well. To resist these risks, government needs to enhance AI governance, the media should play a role in maintaining humanistic values, and the public should seek reasonable behavior. The core of AI ethical issues is data: the order of data and the basic rules of data usage. Institutions are an important public good; it is necessary to learn from one another and remain interconnected. For AI ethics and governance, it is crucial to work closely together and build new mechanisms, moving from communication consensus to institutional co-construction.
  2. It is first necessary to identify what status and rights we will give to AI, which depends on whether we see AI as partners or as tools.
  3. AI has eased labor shortages, and a large amount of repetitive work can be replaced by AI. At the same time, AI and the algorithms behind it have, to a certain extent, reduced people's right to make their own choices, and small and medium-sized enterprises cannot compete with large enterprises in AI due to a lack of resources and capital, which further reduces their competitiveness and development opportunities.
  4. A doctoral student from the USA shared her ideas on how algorithm transparency influences people's use of social media for health information. In the past, people would gather information about a disease from different websites; now, with the help of AI, they receive all the information about the disease on social media with just one click. The algorithm recommends relevant information based on your searches, and users thus become cocooned by the algorithm. As users, we need to know the algorithmic rules that determine the information we see on social media; researchers need to understand the potential risks and push for transparency; and social platforms need to explain the basis for their use of algorithms and rebuild them to truly serve the public good.

Participants' ideas:

  1. Online participants suggested that the main issues in the ethics of artificial intelligence revolve around inequality; another is humanity.
  2. An onsite participant raised his theory of two containers: when we talk about AI ethics, we first have to examine how ethics are derived. Ethics are core values that come from society, culture, or religion. In addressing AI ethics, we can sort these ethics into two big containers. One container holds universal ethics, which are adopted globally by all societies and cultures; the other holds regional ethics. We first have to figure out which are which and then feed them to AI systems. One should also keep in mind that the universal ideas can be biased by the bigger players that stand to benefit most from the technologies.

Questions:

Some participants were concerned about whether AI would worsen inequality between countries.

Speakers responded that this is not a new phenomenon: even with older technologies such as the Internet and satellites, there was a very significant global digital divide. In the era of AI, three solutions are worth considering: 1. international organizations such as the United Nations can act to reduce the divide; 2. multilateral collaboration between different countries; 3. underdeveloped countries should invest more in both hardware and software, especially by increasing education around this new technology area.

Summary: Each and every one of us needs to take part in this great process: governments, big companies, media organizations, international organizations, and, most importantly, the general public and individuals. Everybody, including us, needs to play a role in this process.