IGF 2021 - Day 4 - WS #68 AI Ethics & Internet Governance: Global Lessons & Practices

The following are the outputs of the captioning taken during an IGF virtual intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> We all live in a digital world.  We all need it to be open and safe.  We all need trust.

>> And to be trusted.

>> We all despise control.

>> And desire freedom.

>> We are all united.

   >> BU ZHONG:  All right.  It's the right time, you know, so we can start now.  Thank you so much to the wonderful panel of experts I asked to join us, and thanks to the audience; feel free to join us at any time.  When we designed this topic back last year, AI ethics and Internet governance, global lessons and practices, we did not know AI was going to become such an important and critical area that people would take an interest in.  Online we have a lot more people joining us here.

Without further ado, I will ask the first speaker to come and share his ideas about intelligent media use in China and its challenges and implications.  Welcome Professor Lu Wei, Dean of the College of Media and International Culture at Zhejiang University.

   >> LU WEI:  Thank you so much.  It's a great pleasure to meet new and old friends of the Internet.  Since I have only five minutes, I would like to very briefly report some key findings of our study about intelligent media use in China.  Basically, we found that China has entered the intelligent media age since last year, because according to our survey, more than half of the respondents have used intelligent media to receive news and also spent more than 1,000 RMB on intelligent media products.

Second, we found an intelligence divide is emerging.  The use of intelligent media differs among age groups, educational levels, income levels, and urban versus rural backgrounds.  So, we can conclude that the intelligence divide has become a new form of digital divide.

And third, according to our survey, AI technology is reshaping the media industry: rather than enhancing the content of journalism, AI has had a stronger impact on its forms.  Three risks deserve more scholarly attention.  The first and most important concern from our respondents is the concern about so-called information cocoons; privacy invasion and value erosion were perceived as challenges as well.

So, based on these key findings, I think we can make some suggestions for governments, for media, and for individuals.  For governments, we think they need to enhance AI governance by law, by ethics, and also by new technology.  For example, this September, China's government enacted a new law on data security, so I think law may be one of the most important measures we should take to guarantee the reasonable development of AI technology.

And second, for the media, we think professional journalists should maintain humanistic values.  For example, we don't think AI technology should replace humans; rather, it should empower humans.  Media people should pay more attention to the quality of information rather than the quantity, and mass media as well as intelligent media should show stronger responsibility in content production and the design of new products.

Finally, we believe the general public should adopt more reasonable behavior.  For example, individuals need to enhance their intelligent media literacy, including knowledge, skills, and critical ability regarding their intelligent media use.  Individuals also need to reduce technology dependence.  Technology is good because it facilitates a lot of things in our daily life, but we should not be overdependent on new technology.

Finally, we think general users should fight against overconnection.  Through Internet connection we can make the whole world a global village; but we also need to keep something offline.  We should not move everything online.  We should leave a blank space in the process of digitalization.  So that's basically some of the suggestions and implications from our study about intelligent media use in China.  That's all I have to say right now.  I welcome any feedback and questions later.  Thank you, Professor Zhong.

   >> BU ZHONG:  Thank you, Professor Wei.  I think it's very good.  Here at IGF we mostly do not use PowerPoint, and you know Professor Wei --

   >> LU WEI:  Sure, I think that's better.

   >> BU ZHONG:  He had a wonderful PowerPoint, but I really share your concerns and your wishes here.  Okay.  That's so great.

Let's move to the next speaker, who comes from Canada: Professor Matthieu Guitton from Laval University in Quebec.  You will see a recorded video.  Please share your screen, including audio, please.  No audio.  Can you reshare it again?

   >> YUANYUAN FAN:  Can you hear that?

   >> BU ZHONG:  No.  Let me do it.  Let me take control of this.  Yuanyuan, can you stop sharing?

   >> YUANYUAN FAN:  Okay.

   >> BU ZHONG:  All right.  I'm going to share his talk.

   >> MATTHIEU GUITTON:  Hello, everybody.  My name is Matthieu, and I am from Canada.  First, I want to say it's an honor to be here with you today, and I thank Professor Zhong for the invitation to present in this workshop.  I will talk briefly, for a few minutes, about the problem of the status and rights of artificial intelligences and questions of ethics and governance.  I will mostly ask questions today rather than provide answers, because what we need now is further debate and discussion.

One of the fundamental issues, if we consider the possible and likely impacts of artificial intelligence on governance and the ethical issues likely to arise from that, is the very question of what the nature of artificial intelligence will be.  We should already speak of artificial intelligences in the plural: there is not one kind.  There can be several types of artificial intelligence, with some degree of autonomy similar to what we understand as autonomous agents.

What I mean by that is: what will be the status of artificial intelligence?  What will be the degree of autonomy that we consent to grant it?  That is really what is important: what we acknowledge in terms of status.  And related to that, what will be their responsibility, their accountability, and ultimately their rights?

In other words, do we want artificial intelligence to be mere tools?  Very complex, very powerful tools, but still tools.  In this case, what is the amount of our own freedom that we are ready to let go into the quote/unquote hands of an artificial intelligence that is somewhat autonomous?  Or do we want artificial intelligence to become partners instead of tools?  That would mean at some point granting rights similar to human rights, artificial intelligence rights, and putting ourselves in a frame of mind that would ultimately allow us to have a dialogue between humans and artificial intelligence.  In other words, to approach artificial intelligence in a way that might result in the rise of artificial humans, at least in terms of ethics, status, and rights.

Let me finish with that, as I believe it provides some ground and context for the questions that the other panelists will bring up in this workshop.  So really, what will be the status and the rights that we give to artificial intelligence, and as a result, will they remain tools or will they become partners?  Thank you very much for your attention.

   >> BU ZHONG:  Okay.  That was, you know, our second speaker, who came from Canada.  I do share his concern.  We actually have more questions than answers when talking about AI.  Next we'll invite Professor Amit Sharma, who is also the Director of our Food Decisions Research Laboratory, and his talk is titled Technology Transparency and Informed Choices in Food and Agriculture for Sustainable Development.  Professor Sharma.

   >> AMIT SHARMA:  Thank you.  It's an honor to be here.  Included in the title is our area of research, which is food and agriculture, particularly the food service system, what we call Food Away from Home.  As you all might be aware, that has increased tremendously around the world, and in the United States, spending on Food Away from Home prior to COVID-19 was more than the money we spent on food at home.

There are similar trends around the world, so how we act in the food environment outside the home is extremely important.  Recently, my colleagues and I wrote a paper on ethics in the food service system, basing our discussion on a model that essentially looked at three aspects of ethics: autonomy, justice, and well-being.  What I want to share with you today is in the context of autonomy.  AI, obviously, has been extremely valuable for both consumers and firms.  For instance, AI technologies such as smart assistants and bots can be very helpful.  Case in point: you open an app and you want to order food.  The process has become extremely convenient.  It reduces a lot of the effort and cost on the consumer's side.  However, there are obviously concerns.

One of the other things on the supply or provider side is repetitive tasks.  As you are aware, there is a huge labor shortage that we are facing, and we really need to relook at some of these job profiles.  AI can help reduce the repetitive jobs so that we can elevate the competency levels of the jobs that we really need for value-add.

The concern that I want to share with you in the context of autonomy is exactly the topic of this discussion: choice.  What AI does is reduce the choice set, literally, and it can be very subtle.  Guidance to some level is good, but repetitive subtle guidance can steer us in one particular direction.  So, one of the things we talk about a lot in the paper is the idea of nudges, a more paternalistic approach in which someone else decides what you would prefer or choose.
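To make the choice-narrowing dynamic Professor Sharma describes concrete, here is a toy Python simulation.  It is only a sketch under stated assumptions: the cuisine list, the always-recommend-the-most-clicked rule, and the follow probability are invented for illustration, not any real app's recommender.

```python
import random

def recommend(click_counts: dict[str, int]) -> str:
    """The nudge: recommend the historically most-clicked cuisine."""
    return max(click_counts, key=click_counts.get)

def simulate(rounds: int = 50, follow_prob: float = 0.8) -> dict[str, int]:
    """A hypothetical user follows the nudge with probability follow_prob,
    otherwise picks a cuisine at random."""
    cuisines = ["pizza", "sushi", "salad", "tacos"]
    clicks = {c: 1 for c in cuisines}  # start with no strong preference
    for _ in range(rounds):
        nudge = recommend(clicks)
        choice = nudge if random.random() < follow_prob else random.choice(cuisines)
        clicks[choice] += 1  # the engine then learns from its own nudges
    return clicks

if __name__ == "__main__":
    random.seed(0)
    # One cuisine typically dominates: the effective choice set has narrowed.
    print(simulate())
```

Even this crude loop shows the feedback at issue: repeated subtle guidance compounds, and the user's effective choice set shrinks toward whatever the engine happened to start reinforcing.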

On the consumer end, we certainly think there is an ethical issue related to nudges.  On the supply or offer side, most of the industry consists of small businesses.  One of the biggest challenges for small businesses is the availability of resources and finances, so they are not really on a level playing field with larger businesses to take advantage of new technologies such as AI.  In terms of interventions, we believe that at a minimum there has to be a level of transparency, whether on the consumer or user side in terms of how they can expand or maintain that choice set; and on the offer side, we think interventions are necessary to help owners, small owners in particular, learn how to leverage AI to increase margins and to have the resources to invest in it.

Along those lines, for both sides, I think what we talked about is education, and it's necessary to ask where we are when it comes to our autonomy of choice.  Thank you, Dr. Zhong.

   >> BU ZHONG:  Thank you, Professor Sharma.  I want to talk to our on-site participants at IGF.  Welcome.  Thank you for coming.  AI ethics is actually a very important topic this year.  We're very honored to invite a panel of seven experts who come from different areas to talk about AI ethics.  The timing is not very good, except for you folks on site; for us in the Eastern Time zone in the United States it's very early in the morning, and Professor Sharma and also Lola and Renata, you got up so early and I appreciate that.  On the Beijing side it's the evening, and thank you for joining us virtually, coming out of your dinner to join us.  Anyway, we welcome those of you on site; you have more of an advantage.  The next speaker is Lola Xie, a doctoral student at Penn State University in the United States, and her title is how algorithm transparency influences people's use of social media for health information.  Please.

  >> LOLA XIE:  Thank you.  It's an honor to be here.  Hi, everyone.  My name is Lola, a PhD candidate at Penn State University.  I study people's use of information and communication technology for personal health management, especially by those with chronic mental health conditions such as opioid addiction, depression, and eating disorders.  Today I'll briefly share some of my experiences working with a very specific form of artificial intelligence: the algorithms we use on social media.  In my scientific inquiry, one question I always ask is how people use social media to acquire the health information they need, and how information online can help them achieve better health outcomes.

If you have ever tried to search for any health-related information online in the past couple of years, you will have seen the changes in the way we present and communicate health information online over time.  In the past, we had to proactively search for health information about a specific disease or condition and collect different pieces of information from different websites.  But with the advancement of media technologies today, when you search for a specific condition online, you will see an overview of the condition telling you the symptoms, the causes, the treatments, and ways you can seek professional help, at just one click.  Moreover, if you search for the condition on a major social media platform, you'll find millions of posts about it from other users, and you'll also find access to information from credible sources such as the WHO, CDC, and NIH in just one second.  That's the case for COVID: if you search for COVID-19 on Twitter, Instagram, or Facebook, you'll be directed to the CDC's and NIH's websites about the disease, and they'll tell you what to do.

Artificial intelligence, in other words an algorithm, is behind all of this.  In my specific research area, algorithms also help communicate very sensitive health messages to people for whom those messages can be very triggering.  For example, we have algorithms that help detect risky language used in users' posts on social media regarding eating disorders: whenever language that glorifies or promotes eating disorders is detected, the trained AI puts a flag on the post, evaluates it, and then decides either to take down the post or to add a small trigger warning to it for other users.

The algorithm will also make pro-recovery content and help-seeking information more visible to other users, while making pro-eating-disorder content less visible on the platform.  While this type of algorithm can largely help combat misinformation and reduce pro-eating-disorder content, it has also raised some technological and ethical concerns over the years.
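As a concrete illustration of the flag-evaluate-decide flow just described, here is a minimal Python sketch.  Everything in it is an assumption for illustration: the keyword scorer stands in for a trained classifier, and the thresholds and action names are invented, not any platform's actual model.

```python
def risk_score(text: str) -> float:
    """Hypothetical stand-in for a trained classifier that scores
    pro-eating-disorder language on a 0-to-1 scale."""
    risky_phrases = ("thinspo", "pro-ana", "extreme fasting")  # assumed keyword proxy
    hits = sum(phrase in text.lower() for phrase in risky_phrases)
    return min(1.0, hits / 2)

def moderate(post_text: str) -> str:
    """Flag, evaluate, then decide: remove, add a trigger warning, or keep."""
    score = risk_score(post_text)
    if score >= 0.9:   # high confidence the post promotes eating disorders
        return "remove"
    if score >= 0.5:   # uncertain: keep the post but warn other users
        return "add_trigger_warning"
    return "keep"      # recovery or unrelated content stays visible

if __name__ == "__main__":
    print(moderate("my pro-ana extreme fasting plan"))           # -> remove
    print(moderate("one year into recovery, feeling stronger"))  # -> keep
```

Note how brittle the keyword proxy is: a recovery post that quotes risky phrases would score just as high as a harmful one, which is exactly the false-flagging problem raised next.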

One issue is the accuracy of the algorithms we're using today.  Past research has suggested that there is substantial overlap between pro-eating-disorder content and pro-recovery content on social media, making it extremely difficult for AI to differentiate recovery content from pro-eating-disorder content.  At the same time, major social media platforms are very slow and reluctant to adopt new algorithms to help detect these kinds of health risks in language on social media, because pro-eating-disorder and pro-dieting-culture content can actually bring more traffic and profit at the end of the day.

This often results in pro-recovery content being falsely flagged and deleted while pro-eating-disorder content stays under the spotlight.  In a recent research project that I did with my colleague from the University of British Columbia, we interviewed social media content creators who post about their eating disorder recovery online.  What we learned from them is that they have a major complaint about how algorithms on social media today often make recovery content less visible while promoting content that perpetuates ideal body images and extreme diets.  And it's not just the case for eating disorders, but also for much other health-related information online: the opioid crisis, COVID-19, and many other health conditions.

So as users, we need to know more about the algorithms deciding what we do and do not see on social media, especially when it comes to important health information.  That's why, as researchers, we need not only to make our algorithms more accurate in detecting potential risks in user-generated content on social media, but also to make the process more transparent to the public, letting them know what we're doing to help them achieve better health outcomes.  At the same time, we need major social media platforms, those big tech companies, to be more transparent about their algorithms, explaining how they prioritize certain information over other information when presenting health information, and to reconstruct their algorithms to actually serve the public good.  That's everything I have to say.  Thank you, everyone.

   >> BU ZHONG:  Thank you very much, Lola.  To the audience and those on site: we have two more speakers, and then we'll open our forum to you and go to the Q&A.  Our next speaker is a distinguished professor at Zhejiang University and a very well-known Internet researcher from China, currently at the College of Media and International Culture: Professor Xingdong Fang.

   >> XINGDONG FANG:  (no English translation).

   >> BU ZHONG:  Okay.  Thanks.  Yuanyuan, could you interpret what he told us?

   >> YUANYUAN FAN:  Okay.  I will briefly translate what Dr. Xingdong has said.  Because of COVID-19, global digital governance has entered a brand-new stage, bearing directly on every person's life and personal interests.  We are pleased to see that in the past two years, in the face of multi-layered challenges from super platforms, China, the United States, and Europe have reached a shared understanding, rare in the history of the Internet, in terms of antimonopoly and Internet governance.  This indicates that in the digital era there is a greater need for global connectivity and cooperation; in addition to establishing good communication, countries need closer institutional exchange, learning from each other, and co-construction in digital governance.  While progress has been made in antitrust and platform governance, the deeper and greater challenge is to build a global consensus on AI ethics and to establish a set of binding and mutually interconnected AI governance systems.

The core AI ethical issue is data: data and the basic rules of data usage.  In the past, China was relatively weak in terms of personal information and data protection, but it has been catching up in the past few years, with government, academia, and industry forming good cooperation to improve the situation.  On the one hand, institutional development has been accelerated; on the other hand, China is learning with an open mind.  For example, the EU's Digital Markets Act is still in the legislative process, but its core concept of the gatekeeper has already been implemented in China's Personal Information Protection Law and Data Security Law, which are already in force.  In the meantime, China has not only launched the Global Initiative on Data Security, but also applied to join the CPTPP and the Digital Economy Partnership Agreement.

Over the past 40 years, China has openly learned from America in technology, and now China will also openly learn from European institutional innovations.  Institutions are an important public good, and it is necessary to learn from each other and stay interconnected.  As for AI ethics and AI governance, it is crucial for us to work closely together and build new mechanisms, moving from communication and consensus to institutional co-construction.  Thank you.

   >> BU ZHONG:  Thank you very much, Yuanyuan, for translating this.  I do share the points in his talk, and I really believe that for AI ethics, fundamentally, we need to pay attention to how to make good use of the data we have.  The capability of processing data is extremely essential for AI ethics and for transparency.  I also share his point about joint effort: in today's world we really need joint efforts in tackling AI ethics issues, dilemmas, and benefits together, East and West, basically, working jointly on this.

Next, we're going to our last speaker today.  We actually have speakers from Brazil, from Bulgaria, from the United States, obviously from China, and also from Canada.  Now we go to Renata Carlos Daou, an international student from Brazil, and her title is how AI can cause policy --

   >> RENATA CARLOS DAOU:  Thank you so much for inviting me.  One of the main issues involving the ethics of artificial intelligence is inequality.  In the way the world is structured today, our economic system is based on the hourly wage, which means people are paid by the hour to do their jobs.  However, as the World Economic Forum points out, with the development of artificial intelligence, companies can cut down their workforce and rely on fewer people.  As has been stated, artificial intelligence has crossed into many fields, like health care, banking, retail, and manufacturing, and can improve efficiency, reduce cost, and accelerate research, for example.

So, the companies that have the capital to invest in these AI systems have an advantage compared to companies that rely on a human workforce.  These companies get a kickstart and will be making more money, which will heighten inequality.

The price of artificial intelligence varies: prototype development starts around $2,000, and the cost of implementing AI solutions can run up to a million dollars depending on the performance and complexity of the software and many other factors.  That means not every business, especially small businesses or those starting with personal funds, can afford to make such big investments, so they still rely on a human workforce.

Another ethical issue related to artificial intelligence revolves around humanity.  Machines have a hard time detecting people's emotions, and they do not possess human characteristics like empathy.  UNESCO actually proposed a dilemma with artificial intelligence and decision-making: the example revolved around an autonomous car, and it was very similar to the classic trolley problem.  If the autonomous car was going full speed in the direction of a child and a senior, and by deviating a little one person could be saved, we would need to rely on the car's algorithm to decide, and the decision would also be linked to another ethical problem, for example, racial bias.  The World Economic Forum has also remarked on how the training curriculum can create biased artificial intelligence software, and in a situation like the automated car, this can result in a life-or-death decision.  So, when applying artificial intelligence software, companies must think beyond cost.  As has been said, if used correctly, artificial intelligence can, for example, create a bigger pool of job applicants, since it would reduce human favoritism.  However, if not applied correctly, the software can simply replicate already-existing human biases.  So, when applying artificial intelligence, all of these ethical considerations need to be taken into account to ensure biases are not built in.  Yeah.  That's it.  Thank you so much.

   >> BU ZHONG:  Thank you very much, Renata.  I appreciate your sharing.  Now, we're ready to open the floor to the on-site and online participants.  I want to spend just a minute reminding us that artificial intelligence is ushering in a world in which human decisions are made primarily in three ways.  Number one is by humans, which is what we're very familiar with.  The second is by machines, which has become familiar to us.  And the third is through collaboration between human and machine, which is not so familiar to us; it never happened before.  AI promises to transform all aspects of human experience, and the core of this transformation will ultimately occur at the philosophical level, by transforming how humans understand reality and our role within the relationship between machines and humans.  We know that only very rarely have we encountered a technology like AI that challenges our prevailing modes of explaining and ordering the world.  The evolution of AI is affecting human perception, cognition, and interactions.  What will AI's impact be on our concept of humanity, on human history, and on morality?  So, my fundamental question when we ask these questions is: who can teach morality or ethics to machines?  AI researchers, philosophers, big tech companies, government officials, regulators?  We don't know.  That's why we're here discussing AI ethics.

Now, I open the floor; please raise your questions.  You know, I don't know how IGF handles this on site, and we don't have any of us in the room, so how can we get questions from the room?  Anyone on site have a question?

   >> YUANYUAN FAN:  Participants with comments, please use the standing mic.

   >> BU ZHONG:  Yes.  Please use the standing mics if you want.  Anyone?  Yes, please.

>> JOSE MICHAUS:  Can you hear me okay?  Okay.  I'm from Mexico.  First of all, congratulations on all of your interventions; they were really, really fascinating.  I wanted to ask each of you several different questions, but I'll ask a general one.  We've talked about how AI impacts very different sectors and economic activities, so I wanted to ask you about how AI will also increase inequality between countries, as more industrialized nations have more ability to develop this technology, and not only that, but to make better use, for example, of data.  With data flows, data can be taken from one country, but other countries have a better ability to use it because they have better technology.  Will the general development of AI technology eventually increase the inequality between countries even further?  I wanted to hear from all of you, or whoever has a take on this: how do you see this playing out in the next years?  Thank you.

   >> BU ZHONG:  That's a very good question, Jose.  Thank you very much.  His question is basically about how AI may affect the relations between countries and the inequality among them, and I think you also indicated how AI may help us jointly make better use of the available data.

Would any of our panelists like to address this question?  Professor Wei?

   >> LU WEI:  Yes.  I think that's a great question.  I don't think this is a new phenomenon, because even for older technologies like the Internet and satellites, there was a very big and significant global digital divide; in the era of AI technology, we can see the same global divide around the new media technology.  I think maybe there are three solutions to this new global intelligence divide.  The first is probably through international organizations, through something like the United Nations or a forum like the one we're having right now.  If we can reach a consensus about how these international organizations can do something to reduce this kind of global digital divide, I think that would be a very good solution.

The second would be bilateral or multilateral collaboration between different countries.  Especially if the more developed countries can supply some technologies and some human resources to help developing countries catch up, then maybe those developing countries can perform better in terms of AI technology development.

Finally, I think the poorer or underdeveloped countries should make more investments in both hardware and software, especially by increasing education around this new technology area; then maybe they can do a good job in reducing the global digital divide.  So those are my ideas.

   >> BU ZHONG:  Thank you very much, Professor Wei.  I hope this helped, Jose.  I really share the idea that the United Nations can play a very important role in facilitating serious discussions and conversations about this.  That's why we're here at IGF.  Right.

So, anyone else?  Please jump in if you want to discuss this, even from the audience; we have standing mics there, so let us know your questions.  Any other questions from the audience or online?  Professor Sharma?

   >> AMIT SHARMA:  Yes.  Can I just chime in in response to that question?  I agree there is a divide, and I just want to highlight that there will be a divide not only between developed and developing nations, but within nations as well: a divide between the developed areas and the ones that are not.  In some of these cases, what's also going to drive the divide is the development of markets and how they are better connected in some areas versus others.  Supply chains, for instance, where AI is going to be far more useful in some cases, where it's almost going to be a necessity or will fill a huge gap.  Finally, the last thing I want to mention, and again it goes to the divide, urban versus rural, developed versus developing, is education.  I think Dr. Wei mentioned that as well, so I just wanted to add that.  Thank you.

   >> BU ZHONG:  Thank you, Professor Sharma.  Any other comments on this?  I would love to remind everyone that AI questions are not so easy to handle.  I really appreciate serious discussion about the pros and cons, the benefits and setbacks, that AI may bring us.  It's not always good and it's not always bad.  Yeah.

Those people on site, any questions?  Anyone on the Internet with us here?  Does anyone have any questions?  Discussions?

You know, we have a couple of speakers who did not show up today, for different reasons.

   >> AMIT SHARMA:  Dr. Zhong, can I ask a question?

   >> BU ZHONG:  Please.

   >> AMIT SHARMA:  One of the things that we looked at in our paper on ethics and morality was how they can be defined: is there one single definition of ethical questions and moral ideas?  I wanted to ask the panelists, my colleagues, and others in the room: what is their experience, as they have interacted with other international scholars, of what makes the difference in actually defining questions around ethics and morality?

   >> BU ZHONG:  That's a very good question.  Anyone can jump in.  Renata, Lola, anyone online or on site can jump in before I talk.

I think this is a very good question, Professor Sharma.  I do appreciate it.  As societies develop their own human-machine partnerships, I believe cultural differences can also play a role in the operational and moral limits set with respect to AI.  This is definitely new territory.  AI is obviously taking us to a space where we can no longer be constrained by the limits of established knowledge.  There are definitely new challenges ahead of us.  What bothers me is how to answer the question of who teaches ethics to the machines, and that, increasingly, when machines make mistakes, we cannot hold them accountable for the mistakes they make.  If an autonomous car crashes on the highway, we cannot just say that's Tesla's problem or someone else's problem.  We need to take control of our lives back instead of letting AI drive us somewhere we have never been before.  Yeah.

   >> LOLA XIE:  Yeah.  Can I add a little bit to that?  I think even before AI we had this argument of what is ethical and what is not, and sometimes it goes back to something more philosophical.  Do we believe that if we weigh the good outcomes against the potential risk we may cause to humans, that will justify our decision to do this thing?  Especially with AI: for example, if we collect a person's private health information but we can create a model that better predicts their cardiovascular disease risk, is that something we should do?  We kind of invade people's privacy and freedom in terms of data collection and usage, but on the other hand, we also create a model that helps them achieve better health outcomes in their lives.  So, I always have this kind of argument: do we always weigh the potential risks against the potential good that we're going to do with AI?  Some people may argue that even if you have good intentions and good outcomes from your AI usage, you will still cause some harm and therefore should not do it at all.  People in that stream of research will argue that we should stick with rules: as a society, we make a set of rules regarding AI usage, and then we all follow those rules no matter what outcomes we get from our AI.  So, I think this is still a very important and, in a sense, controversial question for us to think about in a world with AI: do we want to follow rules for our actions, or do we want to be more relativist, deciding what is good in terms of outcomes and how we should weigh that against the potential harm we're going to cause?

   >> BU ZHONG:  Thank you, Lola.  I really appreciate that.  I see someone on site approaching our standing mic; please do.  Thank you very much.  We can hear you, I hope.

   >> AUDIENCE MEMBER:  Thank you.  On the same question, I actually wanted to share my thoughts.  My name is Asim.  When we talk about ethics, we have to first go into the core of how ethics are derived.  Ethics are the core values which come from society, culture, or religion; there are some basics of ethics.  So even when addressing AI ethics, we have to see that we can put these ethics into two big containers, I might say.  One container can be universal ethics, which are universally, globally adopted by all societies and cultures.  And then there are regional ethics.  So, I think we first have to figure out which are which, and then we have to feed them to the AI.  Thank you.

   >> BU ZHONG:  Very good.  Thank you.  I really appreciate the way you used the two containers, and we should keep the two containers in mind when we approach AI.  How are we going to manage that?  Maybe we even have multiple containers, not just universally accepted rules but things that are new, that we never knew before.  As we know, AI is so extremely pervasive in our societies.  I really appreciate the container idea.  Thank you.  Any comments from the panel?

   >> AMIT SHARMA:  Dr. Zhong, I think Renata has her hand up, and then I can make a comment.

   >> RENATA CARLOS DAOU:  I had a comment but it wasn't about this conversation.  It was going to be like another question, so if you need to finish this topic, I can just jump in later.

   >> AMIT SHARMA:  I'll just make a very quick comment.  I appreciate Mr. Asim's comment, and I just want to point out, as Renata said earlier, that AI is being driven by efficiencies, which I pointed out as well.  And then, with reference to the containers: could there be a drive from, let's say, the larger companies, which have a greater advantage or derive a greater benefit from AI?  So, while I absolutely agree that there are these universal ideas, I think the universal ideas can also be biased by the bigger players that have a greater benefit to derive from the technologies.  Thank you.

   >> BU ZHONG:  Okay.  I just want to follow up very quickly, since we have a little time.  Last year we proposed a new model, called the FAT model, to talk about AI acceptance and user experience: F means fairness, A means accountability, and T means transparency, as you know.  But this year we're going to write a new paper and propose an FEAT model, adding the ethical dimension.  We're working on this, and we'll share it with the community as soon as the work is published.  Any other questions while we discuss AI ethics, for those who just joined us to discuss AI ethics, Internet governance, and global experience?

   >> RENATA CARLOS DAOU:  Yeah.  My question, I feel, is more related to Lola's talk, because it reminded me of the news that came out around October about the Facebook Files and how the algorithm was purposely pushing ideal body images to young teenagers, and Facebook knew about it; it was part of the algorithm.  This is not something illegal, but I feel it's morally wrong.  So, in a situation like this, where they're pushing content through the algorithm and purposely making people feel bad about their bodies and creating health issues, what should be done?

   >> BU ZHONG:  Yeah.  Anyone want to talk about that?  You know, this has been a very interesting issue for many years.  The difficulty with AI and AI ethics is that there is a lot of decision-making happening behind the algorithms we're using, without transparency.  We lack the knowledge to understand how those things operate.  I just want to report to you that in the United States there is a new trend.  For example, YouTube said that videos uploaded by anyone younger than 18 will no longer be public by default.  TikTok in the United States says it will stop sending app notifications to teenagers late in the evening.  And Facebook and Google have strongly restricted the ways advertisers can target messages to minors.

So suddenly all of the big tech companies are beginning to protect kids.  I believe that comes from pressure from us, from Civil Society really holding them accountable, like the Facebook Files Renata mentioned moments ago and the Congressional hearing that happened in the United States.  I don't think that's enough, though.  We cannot rely on big tech to just do their own thing and say, oh, we'll protect kids and people.  I'd like Civil Society and other stakeholders to get onto this topic, like IGF organizing AI ethics discussions, holding them accountable, and improving our own AI literacy regarding how we can detect the harms and benefits coming our way.  Okay.  Five more minutes.  Lola, please.

   >> LOLA XIE:  Yeah.  Can I add a little bit on that?  I completely agree with what Dr. Zhong just said: it's not just the companies' responsibility to prevent things like that from happening again.  On the one hand, we need more regulation of tech companies, requiring them to show us how they decide what to put in the algorithm and what not to, and to be more transparent in the way they're using data, so that we can know what informs their decisions.  On the other hand, I also think that we as users and scholars need to -- I think Dr. Wei talked about this in his speech -- have more media literacy interventions, so that people will know more about how social media works and how we can use different tools to protect ourselves during social media use.

And as Dr. Zhong just said, some companies are ahead of others in dealing with this kind of issue.  I specifically research and work with eating disorder patients, and if you search for eating disorders on TikTok today, you won't be able to see any such content; it will directly lead you to the National Eating Disorders Association and the NIH page on eating disorders.  But other companies are a little slow and behind: if you search on YouTube, Twitter, or Instagram, you will still be able to see those contents.  I think that is a really good step for TikTok to take, and something all tech companies should look up to and incorporate into their social media strategies and algorithms in the future.

   >> BU ZHONG:  Thank you, Lola.  I appreciate this.  So far, we haven't had any big technical issues, and I really appreciate that; IGF provides this for us.  For those people on site, our standing mic is over there, so does anyone want to jump in and join us for some discussion?  We have a few minutes reserved for you folks.  Please.  Thank you.  Anyone on site?  All right.

How about each of us gives a sentence to wrap up today's wonderful discussion about AI?  I would like to thank IGF again for this opportunity.  Anyone want to give just a one- or two-sentence wrap-up?  How about we go to Renata first; you were the last to talk, so maybe now you go first and we wrap it up.

   >> RENATA CARLOS DAOU:  Thank you so much for the discussion, because I definitely learned a lot about topics that I wasn't really familiar with.  So, yeah, I just wanted to say thank you for the invite, and it was great.

   >> BU ZHONG:  Thank you.  Okay.  Then we'll go to Lola.

   >> LOLA XIE:  It is a great honor to be here, and I learned a lot from all of our speakers.  I think we're discussing something that is really important in our society nowadays, and I hope that beyond this panel and our discussion today, we'll have more discussion outside the panel to push forward more ethical research on artificial intelligence.

   >> BU ZHONG:  Okay.  Next, we go to Professor Sharma.

   >> AMIT SHARMA:  Thank you, Dr. Zhong.  We've got to keep talking about this: more awareness, more education; we've got to keep asking more questions.  These discussions are very relevant and timely.  Thank you.

   >> BU ZHONG:  Okay.  Thank you.  Professor Fang?

   >> XINGDONG FANG:  (no English translation).

   >> BU ZHONG:  He said we have not attended the in-person IGF for two years now, and we really want to come back.  Me too.  I really share that.  Professor Wei?  Oh, I missed him.  I want to jump in on AI and new solutions beyond human learning.  I'd like us to focus not just on the harms or benefits of AI, but to take AI as a coherent whole.  I don't think any technology just brings us benefits or just brings us harms; there is a lot going on.  But indeed, AI is not like other technologies, because machine learning systems are devising solutions beyond the scope of human imagination, which is very amazing.  The AI age is coming to us, and I think that's fantastic.  Actually, you know what, we've almost hit time now.  Christian, you're just joining; do you have anything to say to us?  You can unmute yourself, please.  Go ahead.  Christian?  Okay.  We cannot hear you.

   >> LU WEI:  I'm sorry, I was disconnected.

   >> BU ZHONG:  All right.  That's fine.  Go ahead, please.

   >> LU WEI:  I think that was a great panel.  AI ethics and Internet governance is a very complex issue within a very big social system.  We need everybody to take part in this great process: governments, big companies, media organizations, international organizations, and, most importantly, the general public, the individuals, everybody including us.  We all need to play a role in this process.  Thank you.

   >> BU ZHONG:  Great.  Thank you, Professor Wei.  Christian, everybody can hear you now; before we wrap up, can you talk to us?  Do you have some words?  You look a little bit frozen to me.  I don't know.  Yeah, you're good.  Okay.  Unmute yourself, please.  Christian Nzhie.

>> CHRISTIAN NZHIE:  Yes, can you hear me?

   >> BU ZHONG:  We can hear you now.  Yeah.

>> CHRISTIAN NZHIE:  Yes.  Thank you.  Thank you very much.  As I was saying, I am just here to provide technical support for the session.  I don't have much to say, having listened to the conversation.

   >> BU ZHONG:  Okay.  Good.  Yeah.  Thank you.  I appreciate this.  I do think this is a wonderful start to the conversation, and we definitely need more.  I hope next year we'll meet in person, with handshakes and hugs.  You know, even if on-site food is not always good, that's not the most important part; I hope we meet in person next year.  I thank those who have come to the conference and attended this session.  I appreciate your participation.  I hope we all learn from each other through the platform of IGF.  Okay.  Good morning, good evening, good afternoon, enjoy your lunch, all of those things, and I wish to see everybody next year.  Okay.  Bye.