IGF 2018 - Day 2 - Salle VII - WS #11 AI Ethics: privacy, transparency and knowledge construction

The following are the outputs of the real-time captioning taken during the Thirteenth Annual Meeting of the Internet Governance Forum (IGF) in Paris, France, from 12 to 14 November 2018. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

 

>> MODERATOR: Hello. Hello. Can everybody be quiet a little bit? We have a sixth speaker. Can you sit together a little bit so we can have a speaker here? Is it possible? We have a sixth. Can you move? Sorry about that. Thank you very much.

Finish up. Wonderful. This is yours? Is it possible you can move there, because we are speakers?  Thank you very much. Sorry again. We need two more seats. Can you move to the back? We need two more seats. Move here. Here is one chair. Yep. You can pick this one.

 Do you have a connector to the projector? I got it. Don't worry. I need projection. Okay. Okay. You want the other one? That will be an easier one. Let me put in the USB.

We can put in over there, right? 

>> Sorry for the system. It's not possible.

                Let me put it into the USB anyway.

Sorry for that, because we need to put all the file together and then we can start it. I put it into a USB and then they can do it. That's okay. Go from this way all the way is fine.

Give me that list then, I can do it. Okay.

10 minutes.

[ Laughter ]

>> We don't have a counter.

>> KUO‑WEI WU:  Okay. Thank you very much for coming to this AI ethics workshop, organized by many people. It surprised me that we have a sixth speaker. Basically the rundown will be: I will spend three to five minutes to introduce each one. Then we will give each speaker time to talk, and they promised me they will finish within 10 minutes. Then we will have a dialogue between the speakers and panelists. Following that we have 20 minutes for follow‑up and anybody can ask any question. If you'd like to stay even later, this room is available, so we can do that.

                Okay. First of all, let me introduce myself. I am Dr. Kuo-Wei Wu and I live in Taipei. I have worked for six years and been a board member for 12 years.

                Today, we have a sixth speaker. The organizer, Dr. Yik Chan Chin from Xi'an Jiaotong-Liverpool University in China, and Changfeng from Tsinghua University. Maybe you can explain to us. And Ansgar Koene, senior researcher at the University of Nottingham. And Yuqiang Chen, from a startup company. And we have another person at the back doing online moderation. Can you raise your hand? That's good.

                Okay. Now, I think it's done. I can do that. Kind of nervous. Okay. I put in the USB for the presentation.

Okay. Okay. That's good. Dr. Chin, you can start it. She is the organizer. Keep to your minutes, please.

>> Can we minimize that screen, please?

>> YIK CHAN CHIN:  Okay, yes. I am the organizer of the workshop, and that's also why I am the first speaker: I would like to set the scene for the workshop. In this workshop we want to discuss AI ethics from different sectors, including civil society, industry and academia.

                Okay. First of all, I'd like to introduce myself and Dr. Chen from Tsinghua University. Next slide, please. 

First of all, I'd like to discuss the general issues of AI. As many know, the general ethical concerns with AI include human dignity, privacy, employment, the digital divide, the singularity and also the uses of AI. I can't spend too much time on this issue because we are supposed to show the audience different aspects of these areas.

                Next, please. Discuss with the audience. 

Now, I'd like to talk about AI research and development in different countries around the world. For example, in Europe, most of the development at the European Union level looks at AI ethics. Germany, for example, looks at cloud computing. If we look at China's policy on AI, it is more about manufacturing, intelligent agriculture and logistics. The Chinese government put out a plan and implemented it, and there is funding, such as from the National Natural Science Foundation, to support the development of AI technology in China.

In terms of industry, we can look at agriculture, health and city management. This is a look at the policy and development of AI in China.

It started in 2014 with guidance on developing the internet of things, starting with I.T. and then the economy and society and the five-year plan. The crucial policy came in 2017, which is called the New Generation AI Development Plan. This is crucial. The main thing is to develop regulations and norms for AI and its applications, such as service robots.

Next, please. Look at the debate in China. There are ethical principles and methods of AI. The ethical principles are proposed not by the government but by industry and NGOs. The ethical principles include being human-centered, justice, openness and transparency, informed consent, and responsibility, for example who should take responsibility and liability for the algorithm.

They also proposed concrete methods for the ethics of AI, like the construction of moral agents that can control the ethics of AI, such as an ethical committee. Building up ethics is not a one-off process; it has to be a longer process. Next, please.

A special aspect of China's AI ethical issues is that we have to take cultural contexts into consideration. The community is different, and different cultures may play a part in the ethics of AI; there are differences between Chinese ethics and the rest of the world.

                Second, in China, the research on AI ethics started late and is behind right now. Another thing is that Chinese people lack a deep understanding of some ethical issues, such as privacy, autonomy or dignity. These are issues.

We have very strong development of applications, such as facial recognition and language translation. We are weak in ethical codes and regulations. At the moment, industry initiatives are doing some initial work to regulate the application of AI in different sectors.

Next, I will pass the time to my co-speaker, Professor Chen from Tsinghua University, and she will look at the ethics of AI from a different aspect.

>> CHANGFENG CHEN: I will speak on a specific topic: the ethics of fake news detection by AI. Normally we have two models. One is the content-based model. The other is the social-context-based algorithm. These two models are used in Chinese new media practice, and I think worldwide those technologies and methods exist as well. I want to discuss the ethics of these two models.

                Next. The first model is the content model and its algorithm. This model helps to recognize fake news. This single application, however, circumvents a number of special issues in journalism.

In the tradition of deception detection, the definition of deception is, first, consciously sending false information. There is some risk in this model: in practice deception is defined based on subjective intentions, and false news caused by other factors gets treated as deception.

I don't have the time to explain this fully, but this model really has ethical risks. The other model, next, please, quickly. Yes. This one is based on a social-context algorithm. Those are the models: they take in material and classify it into different kinds of news, one is fake news, one is real news. But there are also ethical risks for the social-context model.

These are on the slides. I'm sorry, we don't have the time. The last one, perhaps: yes, we have some points of discussion for the algorithmic models, just listed on the slides. I'm sorry, we don't have the time.
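As an illustration of the content-based model described above (this is not the speaker's actual system; the tiny labelled dataset below is invented for demonstration), a minimal text classifier might look like this:

```python
# Minimal sketch of a content-based "fake news" classifier.
# Illustration of the general approach only; the training set is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled headlines: 1 = fake, 0 = real.
texts = [
    "Miracle cure discovered, doctors hate this trick",
    "Celebrity secretly replaced by a clone, sources say",
    "City council approves new budget for road repairs",
    "Local hospital opens additional vaccination clinic",
]
labels = [1, 1, 0, 0]

# TF-IDF features over word n-grams feed a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# The model scores new text purely on surface features of the content.
print(model.predict(["Shocking secret cure the government is hiding"]))
```

The point of the sketch is only that such a classifier judges the surface features of the text; whether deception was intended, which the definition above requires, is invisible to it, which is the ethical risk the speaker raises.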

                Thank you

>> KUO‑WEI WU:  Thank you very much. I think the next speaker would be Ansgar, sorry, yes, Ansgar.

>> ANSGAR KOENE: While the slides are being put up: I'm Ansgar Koene, a senior researcher at the University of Nottingham, and I chair the IEEE standards working group developing a standard for algorithmic bias considerations. I'm mostly going to talk from the point of view of standards organizations: why professional organizations have an important role in moving towards regulation in this space, or in helping to set approaches for tackling some of the ethics questions.

Next slide, please. So this is just to set the general scene, why there's been so much interest, to try to look at how to deal with ethics questions, and why there's a push towards the need for introducing some kind of regulatory response.

                It's basically this: we've had, almost on a weekly basis, stories of something involving AI, algorithmic decision-making or similar things going wrong. Stories about fake news are one of them, and algorithms taking down the wrong kind of content, stories about the manipulation people receive through news feeds, and autonomous vehicles that malfunction and crash with fatal consequences, and many more.

Next slide, please.

So one of the immediate responses we see in the political sphere is that this triggers countless parliamentary inquiries. In the UK we had eight inquiries related one way or another to AI, algorithmic decision-making or other topics basically in this space.

Frequently the outcome is that it seems something needs to be done, but we don't know what the exact issues are, so it is potentially too early to move forward with something specific, even though there is a strong sense that something needs to be done.

We see something similar being done at the European Union level with the European Commission having established a high level expert group on Artificial Intelligence, with the aim to develop clear ethical guidelines by the end of, is it this year or next year? And the European Parliament putting through requests for assessment reports, and so forth. Similar things in India and Singapore, for instance, and many other countries as well.

Basically, we see many governments expressing the sense there needs to be some kind of regulation in this space but having uncertainty as to what to do. Part of that uncertainty has to do with a technical issue of this, talking about a new kind of technology not well understood, especially not by the people who tend to be in this kind of policy‑making space.

                Next slide, please. One before.

This is one of the areas where technical communities such as the IEEE and other communities have a role to play. In 2016, the IEEE launched its global initiative on the ethics of autonomous systems. The idea is that this is a space where there needs to be clear advice: what are the actual features of this technology, and where do the concerns arise?

                For instance, there are the frequent issues raised about capabilities: deep machine learning is very popular and it is not transparent how the decisions are reached. Is this truly the case if you look at it from a technology perspective? Is this something that can technologically be addressed to a certain extent? Those are the questions that triggered the start of this initiative.

So the ACM came out with a list of principles, which is a good initial stage; however, the IEEE considered that this doesn't go deep enough. A list of principles, such as algorithmic decision-making should be transparent, there should be a possibility for redress: yes, these are the principles you want, but how do you do that? On one hand, there is the document Ethically Aligned Design, now in its third and final version.

It will be published at the end of this year and it is freely available online. It goes into these issues in more depth. Basically, instead of a short list of principles, you get 300 pages; you're not supposed to read all of them but to focus on the sections relevant to your work: sections on facial recognition and its applications, what the ethical issues are there, and what the potential ways are in which those can be approached.

Another important aspect of it is the development of standards, industry standards. At the moment, the global initiative has 13 standards under development. I will not go through all of them here. They cover things such as transparency of autonomous systems, data privacy, and the one I'm chairing, Algorithmic Bias Considerations, for instance.

                Next slide, please.

So, I will just talk a little bit more about the P7003 standard for algorithmic bias considerations. The idea behind this is to develop a clear framework around the questions you should be asking yourself when you're developing a system. At which stage of the development process should you be asking these questions? Is the data set used for validating, testing or training the system representative of the population that will be affected by it? Those kinds of things. It is being developed in a multi-stage approach.
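To illustrate the kind of representativeness question this standard asks, here is a minimal, hypothetical check; the group names, reference shares and tolerance below are invented placeholders, not anything prescribed by P7003:

```python
# Sketch: compare the group composition of a training set with the
# population it will affect. Group names, reference shares and the
# tolerance are hypothetical placeholders.
from collections import Counter

training_groups = ["A", "A", "A", "A", "A", "A", "A", "B", "B", "C"]
population_share = {"A": 0.50, "B": 0.30, "C": 0.20}  # assumed census figures
TOLERANCE = 0.10  # flag groups off by more than 10 percentage points

counts = Counter(training_groups)
total = len(training_groups)
for group, expected in population_share.items():
    observed = counts.get(group, 0) / total
    if abs(observed - expected) > TOLERANCE:
        print(f"Group {group}: {observed:.0%} in data vs {expected:.0%} "
              f"in population -- possible representation problem")
```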

The working group is open, and everyone participates as an individual. They have an affiliation, but they don't get a different voice because of that affiliation; everyone's vote is equal. Our group is currently approximately 40% academic, 30% industry, and 30% civil society. I'll leave it at that for now.

>> KUO‑WEI WU:  Thank you. The next one would be Wang Shu. Can you bring the file up? Okay.

>> WANG SHU:  We are the biggest social media platform now, and I am going to talk about internet rumors. Everybody knows social media is changing the world. In China there are more than 1 billion active users on social media. But the social media ecosystem also has many dangers. I think one problem is the proliferation of fake news.

According to surveys of the service, about 50% of users say it is difficult to judge the truth of the news on social media.

                The second is that, shaped by industry preferences, emotional news and fake news can be more popular at some points, while the truth of the news becomes less important; this is what we call the post-truth era.

                And picture, voice and video production technologies have made it easier for people to fabricate rumors through editing.

                Next. Okay. I will show a picture to you. This is a picture from a popular news magazine. I guess you already know what the point is.

                Next, whether in the East or the West, there is a proverb that says rumors stop with the wise. Many countries and platforms have begun building mechanisms around that idea.

                However, do we really see rumors stopping with the wise? In my opinion, the answer is definitely no. This is an ethical problem of the net. On the internet, rumors spread far more widely than in print or on television, and they are also linked to crime. We are facing this problem.

                Next. All countries are actively exploring solutions. At the World Internet Conference, the CEO of PR Newswire stressed that false news is a global issue and that authenticity is what matters most in journalism.

                Germany introduced its social media management law in January this year, requiring that social networks remove illegal content reported by users within seven days.

Next, let's talk about how Weibo performs. As the largest social network in China, Weibo has more than 200 million active users every day, and 2 million companies receive revenue through the platform. However, it cannot be denied that there are also fake news and rumors on Weibo.

Weibo launched this page to refute rumors: an official topic that collects daily rumors and pushes rumor-refuting information to the most relevant users, based on their interests. The topic has close to 5 billion views.

Next. We discovered that simply pushing rumor-refuting information is not enough. So we encourage rumors to be reported, and after reviewing them we provide the result to the user who reported them. In this way the rumors are restricted.

Next. Since 2017, Weibo has launched a label product for highly credible media. Sorry, the slide before. One before. Yes, one before. Okay. Yes. If such media find that a message on Weibo is a rumor, they can attach an explanation. The message will not be deleted, but the explanation is attached to the rumor and you can see it. This has become a natural mechanism for refuting rumors.

Next. It is also effective to manage this by creating a credit system, because some people spread rumors without understanding that they are rumors. So we launched the user credit system, where points are deducted. For example, if a Weibo user's score drops below 60 points, they will be restricted from posting certain content; if it drops below 50 points, they will be restricted from speaking.
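As a rough sketch of how such a point-deduction scheme can work (the starting score, per-violation penalty and thresholds below are placeholders, since the exact Weibo figures were not clear from the talk):

```python
# Toy sketch of a user credit score with penalties and thresholds.
# Numbers are illustrative only, not Weibo's actual parameters.
START_SCORE = 100
RUMOR_PENALTY = 10          # points deducted per confirmed rumor
POST_LIMIT_THRESHOLD = 60   # below this: restricted from posting some content
MUTE_THRESHOLD = 50         # below this: restricted from speaking

def apply_penalties(score: int, confirmed_rumors: int) -> int:
    """Deduct points for each confirmed rumor, never going below zero."""
    return max(0, score - confirmed_rumors * RUMOR_PENALTY)

def restrictions(score: int) -> str:
    if score < MUTE_THRESHOLD:
        return "restricted from speaking"
    if score < POST_LIMIT_THRESHOLD:
        return "restricted from posting certain content"
    return "no restriction"

score = apply_penalties(START_SCORE, confirmed_rumors=5)
print(score, restrictions(score))  # 50 -> restricted from posting certain content
```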

Copyright protection is also a good way to control information, and this is the cloud editing platform created by Weibo. It holds copyright for 400 million videos across more than 100 countries. Weibo Cloud supports copyright protection, makes sure the media is safe, maintains rights and filters information.

I saw interesting news that Facebook set up a team to strike against fake news, and that women have more of an advantage in this regard. I don't know if that's true, because the ratio of female to male staff is balanced at Weibo. We want to say that the internet is an international space where content flows everywhere.

Network rumors and the damage they bring also cross borders. Managing fake news on social media is the responsibility of all countries.

I want to say that, as wise people of the internet age, we should make rumors stop at a rational system, stop at an effective platform, and end through our cooperation.

                Thank you.

>> KUO‑WEI WU:  The next one, can you find a file?

>> AUDIENCE: Are you going to take questions at the end?

>> KUO‑WEI WU: When all the speakers finish, we will open the questions. We have 20 minutes for everyone, don't worry about it. Yeah, I think 20 minutes for the floor to ask questions.

>> FELICIEN VALLET:  Okay, everyone. Hi. I'm Felicien Vallet, and I worked on the report on how humans can keep the upper hand, about the ethical matters raised by Artificial Intelligence.

                Just there. That's it.

So, in a couple of words for those who don't know: CNIL is the French data protection authority, created in 1978. This is where I work, just across the street. Its mission is to make sure that data privacy is protected for French residents and citizens.

As you may know, since last May, May 2018, our reference framework is the GDPR, the General Data Protection Regulation, the common framework used across Europe. That's the main mission. As a side mission, CNIL received a new assignment in 2016: the idea is to lead a reflection on the ethical issues raised by new technologies. For the reasons previously exposed, AI was a perfect fit for this assignment.

And so what was decided in January 2017 was to lead a wide debate, a wide reflection within the public. Over the span of about a year, 3,000 people were consulted, in about 60 events organized by universities, professional associations, administrations, companies and so on, on a wide variety of topics: security, education, culture, et cetera.

The result was this report, published at the end of December 2017, a bit less than a year ago, which you can find on the internet and also at CNIL's presentation booth just upstairs, if you want to get one. Feel free to go.

I will present the basic ideas in it. More recently, I wanted to add, these ideas were also taken up in the ICDPPC resolution. For those not familiar with it, that is the International Conference of Data Protection and Privacy Commissioners, the gathering of all data protection authorities around the world. It adopted a resolution on the ethics of AI a couple of weeks ago that takes up these principles.

Next slide, please.

So, very quickly, because we're short on time: in this report we identify four big families of concern. The first one is about whether autonomous machines will be a threat to free will and to the idea of responsibility. The question is how to make sure humans stay in charge, and how to avoid putting too much trust into AI systems that are actually not perfect.

The second big concern that has been identified is the one about biases, discrimination and exclusion. The idea behind this is: how can we detect the effects encapsulated in an AI algorithm, sometimes without the system designers even knowing about them? How can we build a framework for these models? 

We say AI is a very powerful tool but also, as we put it, it encapsulates opinions, through the learning data, the decision parameters, et cetera. We get more and more personalized services. We have to think about what that means in terms of collectivity and community: we are obviously more and more targeted, for whatever reasons, on the underlying assumption that an individual is the sum of his or her data.

Obviously, it seems if we think this way, we lose something. We risk things. As an example, 60% of millennials use social media to inform themselves. If we get only targeted information, we might lose something as a society, as a community, and lose control.

The last concern is about AI and privacy, which for a data protection authority is central: how to regulate the use of data, of which AI needs very vast amounts.

                Next slide, please.

I will be fast. Which answers? Next one, please. This report takes more of a humanist point of view, not really about technical solutions, but what we found interesting is that there are two really strong founding principles we thought should be respected. The first one is the principle of fairness, meaning it applies to the systems we use and will use in the future.

                The idea is that they should be built in the interest of users, and also of users in general, as citizens, as communities, and so on. This means they should say what they do and do what they say, in the proper way. The second founding principle, about vigilance, means we have to fight against excessive confidence in these systems and in how decisions are reached, and keep questioning them. Then we have several recommendations, six of them; I think we can go on and talk about them later on. Thank you.

>> KUO‑WEI WU:  Thank you very much. If you have any questions, please note them down. When we open the floor, we can start.

The next speaker would be Jake from Google, please.

>> JAKE LUCCHI:  Thank you very much to the organizers for inviting me. While we're getting the slides up, I thought I would start by talking a little about why AI is important to Google and why we spend a lot of time thinking about it.

                The reason we spend so much time doing that is that AI has become very central to almost all the things we do, from the product perspective. AI is now built into most of our major products and a lot of the improvements we've seen in those products over time. 

                If you want to skip to the third slide, I will go through these quickly in the interest of time. Think of an example to that.

Google Translate is a product we have had for a very long time, and it was very bad, or is still bad, as a professor was discussing yesterday. Maybe it's still bad.

Can you skip to the third slide?

It's much better than it was before. The reason is that we introduced machine learning, neural networks, a few years ago. If you look at slide four?

It's not working? Okay. It's not huge. I can talk people through it so we don't waste too much time.

                Basically, what you would see if you were looking at slide four is a translation that was incomprehensible, a little laughable, and then the machine translation, which is difficult to distinguish from a human translation. That, to us, is really central to our mission, to make information universally accessible and useful, and we think we can do that using machine learning.

From a product perspective, if we were looking at slide five, we see AI as a solution for solving big problems at scale. What you would see on slides five, six and seven is healthcare, where the shortage of doctors is a huge barrier to expanding healthcare around the world.

One place we saw that was India, which has a shortage of 30,000 eye doctors. A disease called diabetic retinopathy is a major cause of blindness, and if you treat it early on it is a curable disease. The problem is that, because of the lack of eye doctors, it was difficult to do those diagnoses, so many people went permanently blind.

We collaborated with a research hospital in India, took photos of people's eyes and trained a system to diagnose diabetic retinopathy at scale, and introduced it around the region. We see the technology being used for socially beneficial things; this is one example among many.

At the same time we saw a lot of risks and challenges with the technology many identified already. Bias and fairness were ones we were concerned about. We realized we weren't very diverse with the people developing the technology. We didn't have gender balance, for example, didn't have a good racial balance.

When you think about creating machine-learning systems, the models are only as good as the data they are trained on: how representative and diverse the people are, and how the data sets are labeled and by whom. We had a huge problem with bias.

How do you guarantee privacy even when data is anonymized, when you have machines performing across different groups? How do we make sure we're building privacy into those approaches?

We started out with these problems early on, mostly through research, which is what Google does best. We started research into two of the problems you see down here, including machine-learning fairness. One of the things we realized we had to contend with is: what does fairness mean when we are talking about bias? Should fairness mean guaranteeing equality for all groups, going beyond what humans are already doing, or simply reflecting the values humans currently have?

                These are issues philosophers are debating. We started partnering with researchers and people of different backgrounds on these problems.

We also started developing tools researchers can use to tackle the problems at a practical level. One example, on what would have been slide 10, is a set of tools from a team at Google we call the People + AI Research initiative, which brings together researchers looking at these issues. They created a tool called Facets, basically a way of visualizing the data used for training machine learning, broken down by particular feature values. You can see which feature values are overrepresented or underrepresented, particular outliers, or data points that are mislabeled. 

                Because it's user-friendly, you can identify problems that lead to bias or fairness problems. Those are tools we are developing. We're gradually evolving our thinking a bit, doing research and creating tools. We recognize we're a really big company with lots of diversity within Google in how to approach these problems. We need a lot of great people working on the problems, but we also need a shared ethical code that outlines what we as a company stand for and gives us a shared normative framework for identifying potential problems and how we address them.
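As a rough, library-agnostic illustration of what this kind of dataset inspection surfaces (this is not the Facets API; the columns and rows below are invented example data), one can simply look at per-feature value counts and per-group label rates:

```python
# Sketch: inspect feature value distributions in a training set to spot
# over/under-represented values, in the spirit of dataset-inspection tools.
# The columns and rows are invented example data, not a real dataset.
import pandas as pd

df = pd.DataFrame({
    "gender": ["male"] * 80 + ["female"] * 20,
    "label":  [1] * 70 + [0] * 10 + [1] * 5 + [0] * 15,
})

# Share of each feature value in the data: is one group dominating?
print(df["gender"].value_counts(normalize=True))

# Positive-label rate per group: large gaps hint at bias in the labels
# or in how the data was collected.
print(df.groupby("gender")["label"].mean())
```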

About three or four months ago we launched the Google AI principles, which would have been at the bottom there. Those basically outline things we want to actively promote with the use of our technology and also draw red lines around things we will not support our technology being used for, not only by us; we will not allow third parties to use the technology for those things: things like weapons, surveillance violating internationally accepted norms, uses contravening international law and human rights, things like that listed out there.

This would be nice to show on the screen. So, you can see all of them. I can read them out quickly here.

Yes. The ones we're trying to promote: one, being socially beneficial, and only proceeding when the social benefits outweigh the risks. Two, avoiding creating or reinforcing unfair bias, which we discussed. Three, being built and tested for safety. Four, being accountable to people, which builds in notions around explainability and intelligibility. Five, incorporating privacy design principles. Six, upholding high standards of scientific excellence. Seven, being made available for uses that accord with these core principles, making sure more people can use the technology, expanding and democratizing it. At the bottom, the red lines: weapons, surveillance violating internationally accepted norms, and uses violating international law and human rights.

Those are things we now use with products; we look at the principles in the product review process and share them with the folks working on AI. We have an advisory group we are starting to convene, with philosophers and civil society, and new things we are rolling out. It's hot-off-the-press stuff we announced a few months ago, and we're still thinking it through. We're collaborating with civil society and academia, and we welcome views from those in the room and beyond; I'm happy to connect with you for feedback as we're operationalizing this.

Thank you. 

                >> KUO‑WEI WU:  We have Yuqiang Chen, yes.

>> We apologize for the previous screen. We cannot load it. Sorry.

>> No worries.

       >> This is an old system. 

>> KUO‑WEI WU:  Thank you.

                I want to thank you for the presentations.

                I had a question on the system for rumors. The first is how do you identify a rumor? What do you consider a rumor to be? Thank you.

>> Thank you. This is directed at you. I'm curious whether you're engaged with the IEEE standards and the joint committee on Artificial Intelligence, whether your companies have been involved in that and in your own standards as well. Broadly, it sounds like you're doing this work internally. What are your thoughts about spreading that work across the ecosystem, and the role of standards?

Thank you.

Any more questions? One more?

                Go ahead.

>> YUQIANG CHEN: I am from 4th Paradigm, and I will talk about how AI can serve humans better in the future. For that, we want AI to do what we want, so we need to know how AI works.

>> KUO‑WEI WU:  Next. Nice.

>> So if we want to do that, the first thing is the source of the knowledge: AI is not based on human knowledge, it is based on data. What kind of data you feed in determines what kind of intelligence you get.

                The next one is the objective. The machine cannot understand what we humans want; we cannot just give it a command in words. It only understands mathematics. We have to translate what we want into an objective, which is a mathematical function.

                Then we need features. Features is the terminology in AI and machine learning; for the public, you can understand features as viewpoints.

We need to solve problems, and the machine analyzes a problem from different viewpoints. Finally we need the algorithm, the thing most scientists spend most of their time on but the thing the public least needs to know about. What the public does need to know is that the algorithm is based on statistics and mathematics. What is statistics? Counting, in a sense: the majority wins, that's statistics.

This is the fundamental part of machine learning and modern AI.

                Next slide. There are three important characteristics of AI. The first is that it uses historical data to predict the future. You can do a simple experiment to see it: you give some food as a reward, or you give punishment.

                For these systems, to some extent, it's similar. If you want a good recommendation system, you take some activities, and the rewards are the responses: what people like or do not like.

So, machine learning is good at making repeated decisions, decisions that happened in the past and are easy to predict in the future. But if you want to predict a black swan event, you cannot, fundamentally, because it is just based on the past.

Statistical machine learning usually obeys the majority rules in the data. You most often see the most common things in the data.

These characteristics can lead to the majority problem. What kind of data you put into the system matters most. How you inspect the data also matters.
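A toy numerical sketch of this majority problem, with invented numbers: a predictor that simply follows the majority pattern in the data looks accurate overall while being wrong for the minority group.

```python
# Toy illustration of the "majority problem": a predictor that learns only
# the majority pattern scores well overall but fails the minority group.
# All numbers are invented for illustration.
data = [("majority", 1)] * 90 + [("minority", 0)] * 10  # (group, true label)

def majority_predictor(group: str) -> int:
    """Always predicts the outcome that dominates the training data."""
    return 1

overall_acc = sum(majority_predictor(g) == y for g, y in data) / len(data)
minority = [(g, y) for g, y in data if g == "minority"]
minority_acc = sum(majority_predictor(g) == y for g, y in minority) / len(minority)

print(f"overall accuracy: {overall_acc:.0%}")    # 90%
print(f"minority accuracy: {minority_acc:.0%}")  # 0%
```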

                Another thing is that AI makes value judgments, but those value functions, those objectives, are set by humans, not by the machine itself. The machine doesn't know what is good and bad. Next slide.

>> YUQIANG CHEN:  To return to how AI can serve humans better: the first thing is to protect privacy when using historical data. We know we want to blur the data out, but blurring data can degrade performance, so we need to preserve the privacy of each person while, at the same time, maintaining a high level of AI performance.

Another aspect is a more gentle one. We need other people's data to help the AI learning system boot up; you cannot rely only on your own data for an intelligent system, you need data from people like you to boot up your own system. We need that data, but we don't want to leak the information, for privacy. So we need a way of transferring knowledge from one party to another without leaking private data.
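One family of techniques along these lines is federated learning, where each party trains on its own data locally and only model parameters, not raw records, are shared and averaged. The speaker does not name a specific method, so the following is only a minimal sketch of that idea with made-up numbers:

```python
# Minimal sketch of federated averaging: two parties fit a tiny linear model
# on their own data and share only the learned coefficients, not the data.
# The data and the single-feature model are invented for illustration.
def fit_slope(xs, ys):
    """Least-squares slope for y ~= w * x (no intercept), computed locally."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Each party's raw data stays on its own side.
party_a = ([1.0, 2.0, 3.0], [2.1, 3.9, 6.2])
party_b = ([1.0, 2.0, 4.0], [1.8, 4.1, 7.9])

local_weights = [fit_slope(*party_a), fit_slope(*party_b)]

# Only the weights cross the boundary; a coordinator averages them.
global_weight = sum(local_weights) / len(local_weights)
print(f"local weights: {local_weights}, shared global weight: {global_weight:.2f}")
```

In practice the shared parameters themselves can still leak information, so methods such as secure aggregation or added noise are usually layered on top, which matches the speaker's point about not leaking privacy.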

The other issue is the majority problem. There are two ways: correct the data and correct the algorithm. Data is the most important thing in an AI system, so we need to correct it; otherwise the system will be wrong for underrepresented groups.

The next thing is that we should not make judgments by superficial aspects. Humans make judgments by a few aspects, because a human only has the ability to consider three or four aspects at a time; if you have to read a hundred or a thousand aspects, you cannot make good decisions. So humans always look for the most important ones: Newton's laws are three laws; Newton did not invent a thousand rules.

But a machine can read many rules; a machine does not get tired. We need to put in more data and more aspects for analysis so that the machine can find the true reasons, not just superficial aspects.

Another very important aspect of AI is that we need more diversity in the algorithms and in the ecosystems. For example, if you are a social media platform, you have publishers and consumers who read the news.

This is an ecosystem; we do not want to push all readers to read only one piece of news. Perhaps the total consumption would be the same, but the social welfare is not good if only one person is producing all the news; we cannot sustain that long term. So a very important goal for AI is that everyone can interact, not just the majority.

                Next slide.

Okay. So, yeah, we should prevent AI from doing evil.  But AI itself cannot do this thing. We need more regulation. AI is becoming more and more complicated. We know that.

                Algorithms have grown since 2015-2016, and the number of parameters is now large, a million or even a trillion features, and humans cannot read those features and judge whether the models are good or bad. We need to utilize AI to regulate AI, and we need more sophisticated models to inspect those models.

Another important thing is that we need more regulation on collecting data ethically. That is an area where we can correct and sort the data. That's my point.

>> KUO‑WEI WU:  Thank you very much. We have two questions. Anybody want to ask one more question so we can ask the speaker to respond to you?

Go ahead, please.

>> This question is sort of for the whole group. It's nice to see a group that includes several Chinese scholars and business people as well as international representatives. I wonder if people know of any specific Chinese initiatives to take on AI ethics? I know there's Chinese representation in the IEEE process and Weibo has just joined.

>> KUO‑WEI WU:  Any questions?

>> Is it from industry, and is it from different sectors?

>> YIK CHAN CHIN:  It's a pretty open question, because there are pretty well established ethics initiatives in AI internationally, and the Chinese ones are important, but people don't know much about them.

                In my presentation, as I said, the government is trying to encourage, actually encourage, the establishment of an ethical code, but everything is in progress, and a formal ethical code has not yet been established in China. There's a debate between industry, academia, government and NGOs. We are in the middle of the formation stage.

As you just mentioned, some companies have joined international initiatives, so it is a crucial stage, because China does not yet have a formal ethical code and there are different proposals from different sectors.

>> KUO‑WEI WU:  Anyone want to answer?

>> This is a big question. To share my opinion: actually, there is no single standard for catching rumors and fake news. On social media there is more and more activity, and things change in a very short time. To create mechanisms to refute rumors, we spend a lot of money, time and staff. More and more people are asking us to do something about it, so we do it; it is something we do even for our children.

So, I can share an opinion from the World Internet Conference. In the past few years, journalists would ask each other: how can we get more details? Nowadays, journalists ask each other: is it true? Is it true? So, I think that is the change.

Thank you.

>> KUO‑WEI WU:  We have two speakers going to respond and then go to you. You first, then Jake? 

>> ANSGAR KOENE: I want to pick up quickly on the question of the involvement of companies such as Google and Weibo in these activities. I do know that in P7003, the working group, we have participants affiliated with Google and other large tech companies. They are affiliated with Google; they do not speak for Google and do not represent Google's position as such, but we can assume they will take Google's principles on board since they are people from Google. I think you will address that more.

>> JAKE LUCCHI:  I was going to say, yes, we are involved in those processes, as stated here, bringing in our thinking and what we are learning from the processes, and taking up what comes out of the processes as well. A lot of the conversations around Ethically Aligned Design are brought into our conversations too. It goes both ways, and I think that's the way to go moving forward.

>> KUO‑WEI WU:  Question, one, two. Go to the microphone, please.

>> AUDIENCE:  Thank you. It's very interesting that we have presenters from China, and I'm interested to learn about the idea of a user rating system where users are down-ranked for not reporting the truth. It seems to be indicative of a larger social phenomenon of reporting and self-regulating. When it comes to the regulation of AI and the regulation of society, what do you think are the mitigations and how could it vary?

Thank you.

>> KUO‑WEI WU:  Another question, anybody want to ask? If not, I have a question for all of you.

We are talking about AI and ethics; there are a couple of things we have to take care of. Just as Chen presented, there is the data: do you collect the data legally or not? Maybe some countries don't have a data protection law, but in the EU they have the GDPR. Chen presented that AI starts by collecting data. Collecting data, and access to the data, is a critical part; that's one thing.

The second one he mentioned is the algorithm. To take one example, you may have seen the news already: an AI recruiting system didn't work because it had a bias against women. They found it was making bad decisions, so they took it off. That is the second question.

Is there a particular Chinese speaker you want to ask about the AI system?

Maybe some of you, you want to answer my question, please.

>> Could you clarify: is it about the social credit system, or the rating system for fake news?

>> I think both were asked.

>> KUO‑WEI WU:  I think, and if I'm wrong, let me know: in the western news, the general public has had a lot of discussion about China using the credit system to decide who is allowed to get on a bus, an airline or a train, something like that. You would like to know, is that fake news, or how does the system actually work? Am I right or wrong? Are you asking a different thing? You know the question. Okay.

>> Okay. We want to answer the real question: what is the social credit system, and the rating system for fake news, is that right? On the social credit system: I think for the moment the Chinese government is trying to install that system for several reasons. I know there are news reports on this, asking whether this is social surveillance of society. I think we cannot reduce it to that consideration; there are general considerations of safety issues and creditworthiness. One issue is that in China people don't use credit cards the way people do around the world. Therefore, it's hard to check someone's creditworthiness if they don't have a credit card. In China, credit cards are not popular.

                So maybe they are building the system to check people's creditworthiness and to push them to behave: for example, not dodging tickets, not behaving in an uncivilized way in some public space, not cheating, this kind of consideration. So we cannot simply label it as surveillance; it is about general credibility.

                One thing I said: the credit card system is not widely used in China. I do not know whether you want to say something about this.

>> Yes. 20% of Chinese people have a credit card and 80% do not. So, for the other 80% of people, we have other ways of checking whether their credit is good or not. We have online purchasing systems that are widely used in China; about 50% or more of Chinese people use Alipay. From online purchases and online credit records, they can, to some extent, know whether someone's credit is good or not. This is for the financial system. In a wider sense, I don't think China has a full social credit system now.

>> Okay, on the social credit system I can give you an example. Alipay has created a credit score of this kind. If you apply for a visa and your score is more than about 700, you can get a simpler process for things like a visa, because you are considered credible.

>> I just want to add my understanding of the social credit system. We do have a social credit system; perhaps it's not perfect. But this social credit system mostly applies to industry, not to individuals. That's the point.

>> I was actually going to pick up on the previous question as well. I completely agree with the concerns around this issue, which I think weren't quite being addressed: if somebody is going to get down-rated because of spreading rumors, how do we actually define what a rumor is? How credible is it that the person who ended up spreading the rumor could have known whether it was a rumor or not?

Are you going to get downgraded because you didn't have the ability to check the validity of this? These are general concerns, including the fear of saying something because maybe you weren't able to check it enough, and there's a chance it would get listed as a rumor, and what kind of consequences that can have for you.

So there is the way this implicates people's sense of being able to express themselves. I would just like to add that this is not purely an issue with Chinese services such as Weibo; we see this on western social media as well, where they've been pressured to act on fake news, whatever that means, experimenting with methods of flagging up that something could be fake news, but also trying to down-rank the visibility of content based on whether or not people indicated it as being fake news. You get into social dynamics: one person posts something, and maybe other people don't like that post, so they say they think it's fake news and it gets down-rated. This is a big problem, I would say.

From a technical perspective, AI is not up to scratch when it comes to identifying whether news is real or not. There is no reference dataset to say what is true. AI does not understand language; it can do translation by statistically comparing one bit of text to other text, but it doesn't actually understand the content. So how is it supposed to identify what is true or not?

>> KUO‑WEI WU:  We only have the last round to ask questions. Does any speaker want to make a comment, or do we go to the last round?

>> I want to bring up this point, actually ‑‑

>> KUO‑WEI WU:  Keep it short.

>> If you look at Facebook, Facebook is using what was just mentioned. They rate the credibility of the news source, and people can say whether it is credible or not credible. I don't know if Google has those kinds of measures, because Facebook has measures to assess the credibility of the news.

>> JAKE LUCCHI:  We don't have the ability for users to indicate whether something is fake news or not, and rank it up or down based on that. We don't use that as a signal in the algorithm, but we have a fact-check tag that publishers can use to fact-check stories, with a link, and you can see that on search results. It doesn't really have an effect on the story's ranking in search. The algorithm doesn't use those user signals; we don't use that approach at Google.

>> KUO‑WEI WU:  Wait a minute. Please raise your hand if you want to speak, the last turn, please. Go ahead.

>> I think both Google and Facebook use third parties to fact-check in their systems.

>> Fact‑check.

>> Does your company have the technology to detect that, or do you just use third parties?

>> JAKE LUCCHI:  Just third parties. We haven't been able to find a way to use AI to detect fake news, basically for the reasons Ansgar mentioned: AI is not good at context-sensitive issues. The exact same piece of information can appear in a news report that's debunking a particular myth or in one supporting that myth, and AI is not good at assessing that context. We haven't found a way to do that. That's something there's a lot of interest in, but we haven't found it to be reliable. Even at hate speech, which is easier than fake news, AI is not very good yet. We're getting better at terrorist content, which is easier still, and fake news is even harder than that.

>> KUO‑WEI WU:  Raise your question. Please go ahead.

>> AUDIENCE:  Hi, everyone. Can users have access to the criteria and the process of the rating in China? Do you have accountability policies?

>> KUO‑WEI WU:  Another one, go ahead.

>> AUDIENCE:  So if the government is using a private company's algorithm or AI system, and they want to open up the algorithm and look at it to explain the decisions that have been made, if AI is used for government decisions, how do you handle the IP situation?

>> KUO‑WEI WU:  Okay, the question about government use. Anybody want to answer the question?

>> Of course, we publish our standards for this system to the public. Every month we publish more details about it, such as how many people got a high score, and it can be monitored on the web. It's published for everyone. We also have customer service to answer questions: if you have a question about your credit score, you can ask customer service and they will answer it. Every day, we have maybe 2,000 customer service staff.

                Yes.

>> On the question for the authority, about AI being used by government through a private company: I believe it is necessary to make sure, contractually, that you can do that, especially if you deal with sensitive matters, and you will need to be able to explain.

>> FELICIEN VALLET:  If a user or citizen asks why this decision was taken about them, you have to be able to answer. I believe this is obviously something that has to be done. I think this issue can be addressed by contract with the company that provides the software.

And if those terms are not accepted, in other cases the administrations have the data and build the systems themselves, as has sometimes been done in France for several matters.

>> KUO‑WEI WU:  Thank you very much. I think our time is up. First of all, I thank the speakers for getting here on time and for their time. Second, I really thank the audience; you were very cooperative. Really, a lot of very good questions.

I thank you all, the speakers here and the audience.

[ Applause ]

>> KUO‑WEI WU:  I think this meeting is adjourned. Thank you very much.