IGF 2017 - Day 2 - Room XXIV - WS91 Policy Challenges for AI Development

 

The following are the outputs of the real-time captioning taken during the Twelfth Annual Meeting of the Internet Governance Forum (IGF) in Geneva, Switzerland, from 17 to 21 December 2017. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

 

 

>> MODERATOR: In this workshop, as you can see from the title, we mainly focus on the policy challenges for AI, such as issues of data protection, ethical considerations in AI's design and core values, et cetera. To facilitate a diverse discussion, we are happy to have multistakeholder representatives here to share their insights on what policy challenges AI will encounter, and how to create an enabling policy environment that is conducive to sustainable AI development, from different dimensions and (?).

Please let me introduce the speakers on the panel. First, Mr. Satish Babu. I'm sorry, maybe he -- for some reason he is late. Sorry.

Mr. Satish Babu, Chair of ICANN APRALO, founding director of the International Centre for Free and Open Source Software. Welcome.

And, second, Ms. Akari Noguchi from Japan. She works on public policy and corporate governance at Yahoo! Japan.

Mr. Claudio is still on the way.

And, then, Mr. Yi Ma, Professor of Electrical Engineering and Computer Science, UC Berkeley, Fellow of the IEEE and ACM.

Now, I will stop here and leave more time to them. 

And, first, let's hear Akari's sharing. Akari has been working on public policy and corporate governance at Yahoo! Japan Corporation. Please.

>> PANELIST:  Thank you.  (Not translated).  I will speak in English moving forward.  That is the only French I know.

Okay. So, it's such an honor to speak at IGF Geneva 2017 and to see you here. Thank you all so much for coming.

Today, I would like to introduce what kind of AI we are using and will be using in our services, and the challenges for AI development from the Japanese private sector's point of view.

Okay.  Go next, please.  Next slide, please. 

So, before going to the main part, let me quickly introduce our company. Yahoo! Japan Corporation is a separate company from the Yahoo! Inc. that most of you here probably know. Actually, Yahoo! Japan Corporation is a joint venture of Yahoo! Inc. and a Japanese company called SoftBank, so we have our own policy and independent management.

So, in Japan, Yahoo! is quite strong, and it is still No. 1 with 65.7 billion monthly page views; actually, that is bigger than Google. It's kind of a unique market. In other markets, unfortunately, the situation is not the same. Still, in Japan it has a massive impact on the Internet field and Internet users, so we believe it is our responsibility to lead the market, both in terms of policy making and service development. Of course, that goes for AI, too.

Next, please. 

Let's move on to today's theme: AI. By the way, what is AI? It is a buzzword across the world, but its definition depends quite a lot on the context or the speaker. Is it a supercomputer that processes millions of data points in one second, or is it some kind of robot that we see in science fiction movies, one that eventually starts fighting against human beings?

There are various definitions of AI, from so-called weak AI, which mainly focuses on specific applications, to so-called strong AI, or general AI, which performs like a human. We do not yet have general AI that learns, speaks, and thinks just like humans; however, we already have some particular AI applications, such as those listed here.

Next, please.

 

So, let me introduce easy examples of AI we currently use in Yahoo! services. One example is shopping recommendation, which recommends to users the products they are likely to purchase by analyzing their user data.

Also, our Yahoo! top page for mobile devices is personalized for each user with the most relevant news content by analyzing their user data. Likewise, by analyzing and monitoring browsing history, we show the most relevant advertisements to users.
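
(For illustration: a shopping recommendation of this kind is often built as item-based collaborative filtering over purchase data. A minimal sketch in Python follows; all data and names are hypothetical, not Yahoo! Japan's actual system.)

# Illustrative sketch: item-based collaborative filtering.
from collections import defaultdict
from math import sqrt

# purchase_history: user id -> set of purchased item ids (toy data)
purchase_history = {
    "u1": {"camera", "tripod"},
    "u2": {"camera", "tripod", "sd_card"},
    "u3": {"camera", "sd_card"},
}

def item_similarity(a, b):
    """Cosine similarity between two items, based on who bought them."""
    users_a = {u for u, items in purchase_history.items() if a in items}
    users_b = {u for u, items in purchase_history.items() if b in items}
    if not users_a or not users_b:
        return 0.0
    return len(users_a & users_b) / sqrt(len(users_a) * len(users_b))

def recommend(user, top_n=3):
    """Score items the user does not own by similarity to items they do."""
    owned = purchase_history[user]
    all_items = set().union(*purchase_history.values())
    scores = defaultdict(float)
    for candidate in all_items - owned:
        for item in owned:
            scores[candidate] += item_similarity(candidate, item)
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("u1"))  # ['sd_card']: bought together by similar users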

We also use machine learning for email filtering, to catch spam emails.
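
(For illustration: a spam filter like this is commonly implemented as a supervised text classifier. A minimal sketch with a bag-of-words Naive Bayes model follows; the training data is made up, and the model choice is an assumption, not Yahoo! Japan's actual system.)

# Illustrative sketch: Naive Bayes spam filtering on word counts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_mails = [
    "win money now claim your prize",   # spam
    "cheap pills limited offer",        # spam
    "meeting agenda for tomorrow",      # ham
    "lunch on friday with the team",    # ham
]
labels = ["spam", "spam", "ham", "ham"]

# Turn each mail into word counts, then fit the classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(train_mails, labels)

print(model.predict(["claim your free prize now"]))  # ['spam']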

Next, please.

So, we also use machine learning for security purposes. We hold lots of Yahoo! accounts with users' important information connected, such as private email history or their banking account for eCommerce purchases, so it's very important for us to monitor and prevent fake log-ins.

As of now, our expert patrol team imports fake log-in (?) into the system to stop any suspicious access and check the risk, but moving forward, we plan to automate this by using AI to create the suspicious (?) list itself, to find and prevent new methods of attack that have never happened before.

In the beta test, the accuracy was 100%, and it increased the number of accounts we can check, enabling us to process (?) more efficiently in less time. The current approach is like the red circle here, and we use AI to stop more of the suspicious log-ins, the pink ones, among all the fake log-ins.
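
(For illustration: finding attack methods "that never happened before" is often approached with unsupervised anomaly detection, which learns what normal log-ins look like instead of matching known patterns. A minimal sketch follows; the features, numbers, and model are hypothetical, not Yahoo! Japan's actual system.)

# Illustrative sketch: flagging suspicious log-ins as statistical outliers.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one log-in: [hour_of_day, failed_attempts, new_device (0/1)]
normal_logins = np.array([
    [9, 0, 0], [10, 1, 0], [20, 0, 0], [22, 0, 1], [8, 0, 0],
])

detector = IsolationForest(contamination=0.1, random_state=0)
detector.fit(normal_logins)

# A 3 a.m. log-in with many failed attempts from an unknown device.
suspicious = np.array([[3, 12, 1]])
print(detector.predict(suspicious))  # [-1] = outlier: block or review it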

Next, please.

Another example of AI patrol is for our auction service. We have the biggest web auction service in Japan, with 25.8 million monthly users and always more than 52 million items on sale. Since the scale is quite big, naturally there exists lots of fraud. Typical fraud is selling fake brands, like Ray-Ban or Chanel, or not sending the item after the deal is done and just receiving the money from the user. We, of course, have guidelines to prohibit those fraudulent transactions and run a patrol 24/7.

Next, please.

We've been using machine learning to spot suspicious accounts or transactions based on the fraud patterns we found in the past, but these days violators are really good at IT, doing attacks by programming, or they create new patterns so quickly. So, in order to maintain our auction market as a quick and safe place, we service providers are also required to keep evolving, and we are sure AI will do a lot for that.

Our future patrol will be enriched by AI, by learning the past and predicting the possible future patterns.
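
(For illustration: "learning the past and predicting the possible future patterns" is typically a supervised classification task over transaction features. A minimal sketch follows; the features, data, and model are hypothetical, not Yahoo! Japan's actual system.)

# Illustrative sketch: scoring new listings against past fraud labels.
from sklearn.ensemble import RandomForestClassifier

# Each row: [account_age_days, price_vs_market_ratio, completed_sales]
past_transactions = [
    [700, 1.0, 50], [1200, 0.9, 200], [30, 0.2, 0], [10, 0.1, 1], [900, 1.1, 80],
]
was_fraud = [0, 0, 1, 1, 0]  # labels confirmed by the human patrol team

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(past_transactions, was_fraud)

# A brand item at 15% of market price from a 5-day-old account.
print(model.predict_proba([[5, 0.15, 0]])[0][1])  # probability it is fraud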

Next slide, please.

So, lastly, I would like to talk about the challenges we will probably face in AI development. As AI can be related to so many fields, the challenges are also diverse, from the psychological effect on humans to how to foster AI talent. But here today I would like to point out three things: responsibility, accountability, and maturity.

Responsibility is the question of who will be responsible for an AI decision. AI doesn't only follow the program ordered by humans; it keeps on changing, so it can produce unexpected results. If something that was not initially intended happens, is it the AI itself, the service provider, or the algorithm programmer that is responsible for the result, and if so, how should they take that responsibility?

Also, accountability is closely tied to transparency. If it is a human that takes some action, the person can explain the reason for the action, but for AI, we cannot know the whole reason, and even if some important element is missing or mistaken, it is difficult for us to know all about it.

Given these two, I think we should be very careful about which fields we use AI in and how we use it, especially for businesses or social systems that have a direct impact on people's lives, such as recruiting, insurance estimation, or credit assessment. We had better have a human point of view in the judgment, too, and not use AI alone.

We must not let AI systematically exclude a person from society. Remember that AI is a sort of labeling (?); it may be right from an analytic point of view, but most importantly, human faith and understanding should be respected.

Lastly, the maturity of the AI market. As I mentioned in the beginning, we don't have general AI that thinks or acts just like human beings yet, but at this stage, we have some AI-like applications at the development stage which process some work more efficiently than humans. So, AI is not mature, and ruling it too early, especially with a hard-rule type of regulation, may discourage development that would contribute to our society in a good way.

So, at this moment, governments, the private sector, and society should keep their eyes on its growth, continuously discussing the possible issues, and if we are ready to set some rules in the future, we should respect important values, such as privacy, and work with international organizations rather than stick to one country's rules or divided country-by-country rules, because the upcoming Internet (?) will be even more global and, again, AI will become smarter by utilizing a wide variety of data from across the world.

And, most importantly, we have to earn social trust to utilize AI to the fullest. But is it enough just to follow regulation to earn social trust? No, it's not enough to just say, hey, we don't violate any regulation so we're okay. It's not that way. We can earn social trust not only by respecting regulation, but also by morality. Especially for AI, morality is the big issue that concerns everyone. So, we had better start discussing what the important moral factors are that we should respect in AI development while waiting for the AI technology to bloom.

The AI era has just started; the best is yet to come. So, yeah, that's it.

Thank you.

(Applause)

>> MODERATOR:  Thank you for your introduction of AI. 

>> Mic. 

>> Okay. Thank you, Akari, for your sharing.

 

Next speaker is Mr. Professor -- no. Next speaker is Mr. Satish. He is the Chair of ICANN APRALO. He is also a former President of the Computer Society of India. The floor is yours, please.

>> Satish Babu: Thank you very much. My name is Satish Babu, and I represent civil society; I am part of ICANN, ISOC, and IEEE. I will, of course, be looking at the issue of AI policy from the India perspective, but also with specific relevance to something that Akari just mentioned. She said morality; I use the word ethics, for the whole of artificial intelligence.

People have called artificial intelligence the final invention of humankind, meaning, you know, everything else will come from this invention itself. Akari briefly mentioned the difference between artificial general intelligence, AGI, and ordinary artificial intelligence. AGI is a sentient entity. Now, what this means, basically: sentience is the capacity to feel things, to experience things. Sapience is the intellectual capacity. When, in the near-term future, we have entities that can both feel and also act intelligently, then you have artificial general intelligence. We are not quite there yet.

And the issue of non-human agents, which do tasks for us, is actually coming up already. There are many instances of these kinds of agents. There are also ideas of coupling the human brain with the network, leading to questions like: can we download a brain to the Internet? Can we therefore have brains on the Cloud, these entities interacting among themselves?

And what will be the basic implications of this (?)?

If you look a little bit into the future, we see that there are several dilemmas that these developments are going to pose. Some of it is, you know, what happens when AI automates things. What happens, primarily from the human perspective, is that human beings lose jobs: unemployment. Now, as a policy matter, we have to consider what happens to the workforce when we start bringing in technology that takes away some of these jobs.

 

What about the wealth created by machines? Machines are going to create wealth in the future. How is that wealth going to be distributed amongst us, and what is an equitable model of distribution of such wealth? How will our behavior change when we start interacting with the machines? Some of us are already (?) to the Internet. When we have machines -- and already we are talking about automation and replacing (?) -- we're going to have entities and not just real human beings. So, how is that going to change our behavior?

When you have artificial intelligence, you also have artificial stupidity, you know; we tend to make mistakes when we interact with these machines. How about racist robots? Now, there have been many cases. For example, a bank was using an artificial intelligence algorithm to rate loan applications, and after a while it was found that the rate of rejection was much higher for blacks. Now, that was not built into the system; it turns out whatever parameters they used were actually causing this bias. So, will we have this issue of bias coming up? One doesn't assume that it is injected into the AI, but it spontaneously happens. How do we deal with this bias?
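
(For illustration: one common way to surface the kind of spontaneous bias described here is to compare outcome rates across groups, the "disparate impact" ratio. A minimal sketch follows; the data is made up, and the 0.8 threshold is the U.S. four-fifths employment-law heuristic, not a universal standard.)

# Illustrative sketch: measuring disparate impact in loan decisions.
def approval_rate(decisions):
    return sum(decisions) / len(decisions)

# 1 = loan approved, 0 = rejected, split by demographic group (toy data)
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # 75.0% approved
group_b = [1, 0, 0, 0, 1, 0, 0, 1]   # 37.5% approved

ratio = approval_rate(group_b) / approval_rate(group_a)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.50, well below 0.8
# A low ratio suggests some input is acting as a proxy for a protected
# attribute, even though no one injected the bias deliberately.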

Security. How do we keep our artificial intelligence safe from its enemies?

How do we protect against unintended consequences? What happens if something totally unexpected happens as a result of these machines?

How do we remain in control of what we create? There has been a book called Homo Deus, meaning the human god, or humans who have become gods in terms of creating these sapient and sentient beings. The question that comes up is: will we really be able to stay in control of these things when they become truly autonomous?

And finally, do robots have rights like human rights? Are there robotic rights? Do we have a moral obligation to these machines to treat them in some humane way (I don't see any other word that corresponds to humane)? So, do we have that obligation?

There are also other challenges that come up. When you have a system that can parse spoken language, that system can then listen to all conversations. So, what happens to privacy? We can't expect machines to have empathy or a sense of context. So, what happens to human dignity? Suppose you talk about jobs like nurses, psychiatrists, or people attending to old people. These require compassion and context. Now, machines do not have that. So, what happens to human dignity in the process?

The issue of transparency. Now, these algorithms are not something that we can look at tangibly. Take the example of an artificial neural network. There are no rule-based algorithms; it is adaptive, it learns, and we cannot separate its rules out for review, audit, et cetera. How do we ensure transparency? Also, these algorithms are proprietary to companies. They are not open source. So, how does humanity have control over these?

And, finally, the issue of weapons. We are talking about lethal autonomous weapon systems, or LAWS, which has been an active domain of debate in the last few months. At least one nation state has said that it will continue to develop them, although the UN has called for a ban on these things.

So, in summary, I would like to flag some of these issues, and maybe in the discussions we can think of how we can address these ethical challenges that AI may pose.

Thank you very much.

(Applause)

>> MODERATOR: Thank you. Next, we will introduce Professor Claudio Lucena -- long name.

>> PANELIST:  A problem with Brazilians. 

>> MODERATOR: He is -- yeah -- he is a Professor of Law at the Center for Legal Studies, Paraíba State University, in Brazil. Let's hear his view.

Thank you. 

>> PANELIST: Thank you very much. Good morning, all of you.

First, I want to thank the organization for bringing me here. I am developing research on automated law enforcement measures in a consortium between Paraiba State University in Brazil -- I don't know if you are going to be able to present the slides I brought -- and the Foundation for Science and Technology in Portugal. The focus of the research is the automation of law enforcement: automated measures that embed law enforcement properties.

So, I would be here joining you anyway, because of the institutions that organized the panel, my Chinese colleagues' institutions, and knowing the state of deployment and implementation of AI tools in China; as such, I would be here following you anyway. I would like to thank you for giving me the opportunity to join.

I would like to draw a very brief outline. I have heard the problems that (audio pause) were raised here, and I would like to bring you a very brief outline of the policy initiatives, what is going on in the world. The idea is to give you a very quick glance at who is taking care of these policy initiatives, how they are doing it, what they are deploying, and what the remaining challenges are. I don't think we have the slides.

It's on a flash drive. Is there a problem? I sent it by email. Can you access it from there? I think I'll just continue from here and then make the presentation available to you.

Who is responsible for these policy initiatives? There are a number of institutions that are currently deploying them. We have professional organizations, such as the IEEE, which is developing very interesting initiatives concerning policy development. We have joint initiatives, such as the one we had in November in Brazil, on artificial intelligence and inclusion, which was an initiative of a network of Internet and society research centers across the world, this one organized by a consortium between ITS in Brazil and the Berkman Klein Center. We have NGOs, which are also taking care of the issue, and we have a couple of Governments who are also starting to deploy policy initiatives.

How are they doing this? Well, the IEEE is developing a document which is taking inputs and developing, in phases, guidelines, recommendations, and deep analysis. As for the symposia and academic institutions, how it worked in Brazil was that researchers from 80 countries gathered for three days in Rio, trying first to identify the research questions, what was controversial in the area, then how it could be scaled across the world, and what the challenges would be for the future.

For the other institutions, like the NGOs, we have the example of Article 19, which is proposing a new convention, which they, for the time being, call the Internet of Rights, with a couple of recommendations, including ones concerning automation.

And, from governments, we have initiatives like the UK Parliament's: there is a committee on artificial intelligence in the House of Lords.

One of the last acts of the Obama administration was also the release of a report concerning the economic impact of artificial intelligence.

At the beginning of this year, 2017, the European Union, through a report conducted by MEP Mady Delvaux from Luxembourg, also presented a report on robotics and civil law rules on robots. And there are other initiatives, for example, what is happening in the United Arab Emirates. We have just been at the ICANN meeting in Abu Dhabi -- long and sort of controversial meetings. The United Arab Emirates just announced they are appointing a Minister of State for Artificial Intelligence.

So, what are these measures in general? Essentially, they are guidelines and principles. If we take a look, for example, at what is happening in the development at the IEEE -- there are people in the room who are working specifically on that document -- it seems that, as the GDPR has moved to privacy by design for the privacy issue, the IEEE is moving to safety by design concerning human needs in their documents, and they are also preparing, presenting, and opening discussion on guidelines and principles.

Another measure that has been concretely proposed is agencies or regulators. An article a couple of years ago from Andrew Tutt, who used to be a lawyer for the Obama administration, proposed an FDA for algorithms; that is the title of the article. It was supposed to be a government agency in charge of analyzing algorithmic accountability and transparency, the problems that were posed by Professor Satish.

In the case of the United Arab Emirates, it seems they are moving directly to a regulator rather than an agency. And, of course, there are also new legal institutes. Once again referring to the problems Professor Satish raised, the EU report on civil law rules on robotics proposes a, for the time being, still very controversial article, 56F if I'm not mistaken, which proposes a different legal status for robots: in broader, comprehensible English terms, legal personality for robots.

The report of the European Union also advances on agencies and regulators when it proposes, and this is an interesting differentiation, not a general agency to tackle all the problems involving AI; rather, it says there are uses of AI that are particularly delicate, particularly sensitive, and for these there should be regulation through an agency. So, these are pretty much the initiatives that we have.

What are the challenges we face right now? In the presentation I'm going to make available to all of you, there is a blog by Professor Urs Gasser from the Berkman Klein Center, who was with us yesterday for the artificial intelligence and inclusion workshop, as an update to the symposium. In the middle of the year, he wrote a blog post where he lays down very reasonable guidelines on the challenges, on what we face ahead when we talk about policy development for AI. There are a couple of them. I'm going to present two or three, and then I'm going to add my two cents to the discussion.

He urges us, for example, to look at AI as a set of techniques across different applications and not as a single monolithic technology. Sounds like a very reasonable thing. There are very different, very diverse applications that may embed and include AI, and we should not look at it as one thing only.

He also says that law is important, but it's necessary that regulators look at other governance tools; maybe law, or at least hard law, is not the only option that we have here.

Also, one of his recommendations is that, given the disruptive power of what we're talking about, it might be necessary to recode some aspects of the rule of law, and this calls for very strong creativity on the part of regulators and lawyers.

And the last one I'm going to highlight from him: he points out that responses might vary across sectors and across jurisdictions, and we have to be ready to leverage and to balance these differences.

 

I'm going to add two of the things that we have come across in the research as challenges for policy development, and I'm finishing with this. There is a problem with training data sets. I mean, there are a lot of problems with training data sets, from their proprietary nature and character -- not all data sets are open for everyone to train on, to use, to deploy, and to feed into the systems they are developing -- to their accuracy. We talked a lot about algorithmic bias and algorithmic transparency one and a half years ago, and after one and a half years of the research, through different talks at IGF and other forums, particularly with Professor Cannataci, the Special Rapporteur on privacy, I have become, not totally, but mostly convinced that the algorithms themselves pose less of a problem concerning transparency. There is the problem that they are dynamic, organic bodies of code, and as such it's impossible to audit them.

But then Professor Cannataci says: if there is a remedy, then for the purpose of rights, I think we have enough of a way out. So, that is an interesting take for me. But apart from transparency, it is important that we settle the freedom of the training data sets, given their proprietary nature, and the bias of the data sets themselves. Not so much the algorithms, but what is the quality, and what are the standards, that we're going to require of the data sets?

The last take stems from the artificial intelligence and inclusion event. I think we are before a technology, at a moment in history, that may mean a broader inclusion than we ever had, through artificial intelligence techniques, or a broader concentration of power than we ever had. This is the reason why we have to decide, and this is the reason why we have to act swiftly and intelligently and in collaboration, because if we do not do anything concerning the development of these techniques, they are just going to catalyze economic, social, and geopolitical trends and make the centers of power remain where they are, more concentrated than ever. Otherwise, if we look at ways to distribute power and economic value -- if we take, for example, the recommendation of the World Economic Forum report on digital dividends, that we need to share the richness that the digital world is creating -- then we may be heading for a better future in terms of deployment and implementation of AI techniques.

Thank you very much once again for your attention, and I’m ready for questions if you have them. 

>> MODERATOR: Thank you, Claudio.

(Applause)

>> MODERATOR: Next speaker is Professor Yi Ma, who has been engaged in the field of AI, especially computer vision, for many years.

It is your time now, please. 

>> PANELIST: Hello, everyone. I would like to thank the IGF and also the organizers of the workshop for having me.

So, I'm not an expert in policy making; I am more on the technological side, so today I will probably just provide you a little bit of perspective from a more or less purely technological standpoint. Maybe some of my experience can give you some thoughts about how future policies for artificial intelligence might have an impact or influence on our field.

So, I'm from UC Berkeley. This is a little bit of history about myself. I have spent almost my entire career doing research in artificial intelligence, especially in data science. Looking back on my career, I spent almost equal amounts of time: half of the time in the US after graduating from Berkeley, and half of the time in China. In particular, I was a manager of the computer vision group at Microsoft Research Asia, which incubated almost all the leaders of the current computer vision startups in China that you have heard about, such as SenseTime and Face++. Recently I returned to UC Berkeley as a Professor, so back to research.

And so, first of all, I would just like to give you a little bit of personal history about my encounters with some of the technologies that you are familiar with today.

Next slide, please.

So, my research started when I was a graduate student at UC Berkeley, and we were actually very interested in how to perceive the world (?); that was about 20 years ago. At the time, this was not even a field. So, we tried to understand how a machine would be able to perceive, interact with, and (?) the world. This is a book we wrote back then, which featured a few applications we'll show next.

Next slide.

So, this is actually a project that was done at Berkeley; it was a federal project, also supported by the California government. It is about the intelligent highway. That was the 1990s, the late 1990s. Actually, my Master's thesis was precisely on how to use computer vision to do (?). The project was very successful. Actually, Al Gore was riding in our car at about 80 miles per hour on the highway. Didn't have any incident. But it actually took a while, right? At the time the technology was there, and it actually worked quite amazingly, but you wonder why it never really gained any traction in the industry.

So nowadays, if you look, after about nearly 20 years, suddenly we see autonomous driving everywhere. There are hundreds of companies nowadays doing startups or trying to get autonomous car driving into the market -- like Google and so on, and many companies in China. I will mention the reason why in a second. It has to do with policy.

After the car, we at Berkeley actually started on autonomous helicopters. It started with a Navy project trying to have a helicopter land on a moving ship. As you can see, at the time the technology could already do very well in terms of positioning and orientation, as well. So, next, please.

So, nowadays you see that, again after about 20 years, all this technology has been disseminated into the consumer market, and there are UAVs everywhere, palm-sized ones, and one even landed on the lawn of the White House, as well.

So, next slide.

So, the first question is: we had this technology 20 years ago, so why did it not take off? Of course, there are many reasons. Maybe the price. Maybe the maturity of the technology. But I know one reason that actually very much slowed things down: after the California PATH project, the government really tried to disseminate the technology into the automobile industry but ran into a lot of obstacles. One particular obstacle, from what I know, is that the United States has too many lawyers. The insurance companies were actually the obstacle, trying to prevent having too many sensors or too much automation in a car, for whatever reasons; maybe some lawyers were afraid they might lose their jobs. Maybe that is a good thing.

So, that gives an example; it was my first encounter with how societal regulations or policy can actually hold back the dissemination of technology, which was a big lesson for me at the time, as a student. Of course, later on we entered the era of UAVs. They are in all the news these days, for example around airspace. There was an incident back in China where some people flying UAVs around an airport shut down the entire airport, because they sort of endangered civil aviation. Also, some nut flew a UAV over the fence of the White House and caused an incident. And also, because UAVs have become very intelligent and have cameras, they can watch things, they can analyze what is going on; I heard there was an incident where the U.S. Government is suing DJI for violating certain privacy rules.

So, all these things, as you can see: technology can make wonders, right, but somehow it's a very complicated thing to have those advanced technologies enter our lives, because they affect so many things -- you know, they change our lives. They affect many different aspects that, as engineers, we probably do not anticipate ahead of time. Our job is to try to make the technology as cheap and as good as possible, but it's really a much bigger societal issue to discuss the implications of those technologies in our lives.

From my experience, I have always found it very interesting to watch how different (?) have seen technological advancement, in particular in the markets I know very well, because I have spent most of my time in the U.S. and China.

If you look at the two countries, in terms of number of companies, size of investment, market value, and number of patents filed, it is kind of evident that the U.S. and China are the two leading countries in the development of AI technologies. Here is a little bit of distribution numbers that I borrowed from Professor Tuman, dean of the business school of Tsinghua University. You can see two countries, especially the U.S. and China, have 65% of the companies. Also, in patents filed in 2016, it was very much the U.S. and China, with Japan not that far behind.

So, in terms of research in AI, of course, both the United States and China lead by a large margin, as we just saw, but there is a little bit of difference. The U.S. universities and companies focus more on fundamental theory, technology, and systems, whereas AI research in China is mostly application driven, product driven. So, in terms of research, the United States excels in impact, and China wins in numbers and scale, because it just has a very large population and also market.

From a technological standpoint, the United States very much dominates in fundamental technology, such as chip design and manufacturing, and in operating systems and computing platforms, whereas China very much focuses on applications, such as face recognition and speech recognition, due also to its very large market. Again, in terms of technology, the United States excels in depth, but China really wins in scale.

In terms of talent in AI, it is kind of an interesting story. If you look at the top 30 universities, the top AI groups of the world by whatever academic standards, three-quarters are in the United States, but China has none right now. On the industrial side, the United States already has five companies with over 5,000 R&D employees related to these areas, but China has none.

Of course, the AI teams in Chinese companies like Tencent or (?) are growing like crazy right now, very quickly, due to China's market (?). We anticipate that. This is a little bit of numbers I got a couple of days ago, a sort of top hundred of AI startups worldwide; you can see there are already quite a few of them -- a very large portion are actually Chinese companies, and the rest are mostly U.S. as well.

So, next slides, probably.

Yeah. So, as you can see, one thing -- the good thing about directly touching the market -- is this instance I encountered. These days, when we talk about policies, we talk about policies that try to address people's concerns about AI: security, privacy, the dangers it may bring. We all know that technology is sort of a double-edged sword, and there is a very interesting story here: can we also design policies that encourage AI, or AI companies, to do greater good for society? This example, probably, people do not really anticipate. It is kind of an AMBER alert; like in the U.S., whenever a child or person goes missing, messages show on cell phones or on some of the billboards on the highway. There is a media company in China that controls the source, the operation, and also the dissemination and consumption of the news (?), and they have a very interesting app: whenever there is a missing person, you work with the company, and because they know the location and the profiles of the users, they can very accurately push information about the missing person to the areas where it will reach the right group of people to help with locating that person. So far, for example, it pushes nearly 25,000 personal messages every week that can reach 20 million people. Imagine that: 20 million people can help you find the missing person. It also covers 200 cities in China. I think, by any standard -- I lived in the U.S. for many years, and the AMBER alert system is very successful -- there is really no comparison to this kind of efficiency and scale. And, so far, it has already helped resolve over 4,000 confirmed missing-person cases by just using this simple app alone. So, this is something, I think, for future policy makers to think about: really, how to use the policy level to encourage companies to do the greater good.
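
(For illustration: the location-targeted push described here reduces, at its core, to filtering users by distance from the place the person was last seen. A minimal sketch using the haversine great-circle distance follows; all names, coordinates, and the radius are hypothetical, not the actual system.)

# Illustrative sketch: choosing which users receive a missing-person alert.
from math import radians, sin, cos, asin, sqrt

def km_between(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    h = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

users = {  # user id -> last known (lat, lon)
    "u1": (39.91, 116.40),  # central Beijing
    "u2": (39.95, 116.45),  # a few km away
    "u3": (31.23, 121.47),  # Shanghai, far outside the radius
}

def alert_recipients(last_seen, radius_km=20):
    """Users whose last known location is near where the person was seen."""
    return [u for u, (lat, lon) in users.items()
            if km_between(lat, lon, *last_seen) <= radius_km]

print(alert_recipients((39.90, 116.40)))  # ['u1', 'u2']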

So, next, please.

And, of course, finally, in terms of investment, in terms of the future: we are interested in investment. So far, again, the U.S. and China are very much the largest investors in AI technology, far beyond the rest of the world. Based on numbers from 2016, the venture capital investment in the United States is obviously larger than in China, but you can see the investment in China is already comparable in scale to the U.S. Again, we see the same kind of focus: the U.S. is very strong in fundamentals -- processors, chips, learning theory, and also platforms -- but China is very, very strong, and has exceeded the U.S., in terms of applications: computer vision, autonomous driving, and so forth.

So, the reason I put up those numbers is not to make a contrast, but really just to give the policy makers something to think about. You have to have policy that works for two extreme worlds, right? Here there is a very free market on one side, and a lot of application-driven work versus fundamentals, west versus east. Also, just a while ago, the Chinese government actually announced a national strategy to strengthen its position in artificial intelligence, with tremendous government support over the next decade or two. On the other hand, the U.S. side is very much industry driven, a very much free-market type of model.

So, next, please.

So, this leaves just a few things to think about for whatever future policies come. We live in a very complex world. The policy has to accommodate the differences between the markets of different nations. The U.S. and China are two examples, or you can call them two ends of the spectrum: free market versus government driven, applications versus fundamentals. How can you get them to work together so they have a win-win situation, rather than competing with or even boycotting each other? Also, we need policies that resolve concerns about security, safety, and privacy; I think the Internet has similar concerns.

Also, personally, I would like to see policies that really encourage companies and AI technologies to do good for the greater benefit of society, beyond just voluntary efforts.

So, I hope that these differences will not divide us but rather allow us to collaborate and to complement each other, and that through multinational and maybe multistakeholder discussion and collaboration we will be able to reach an international baseline for the governance of artificial intelligence.

Thank you very much.

(Applause)

>> MODERATOR: Thank you, Dr. Ma. Thank you to all the speakers for thoughtful views that invite further and careful thinking about AI development.

Now, let's move to the Q&A session. If anyone has questions, please raise your hand.

Okay.  You. 

>> Audience: Hello. My name is (?) Laqua, working for a standards organization. I want to thank all of the panelists for the very interesting presentations. I am mostly interested in the topic of standards. Listening to Mr. Lucena, I could understand there is some work at the engineering level for standardizing AI, for instance at the IEEE. Mr. Ma also talked about the importance of policies. But I think there are different areas that are very important for AI -- and correct me if I'm wrong, because I'm no specialist. There is a technical aspect, there is a legal aspect, and there are also business and economic issues. I was wondering, according to you, what would be the best way to take all of these aspects into consideration in a coherent manner? Because sometimes people are very specialized in one area, be it technical, legal, or business, but with AI we need to take all of these into account at the same time. So, how can we bring people with different specialties to the same table and ensure that all those aspects will be taken care of, especially at an international level?

Thank you. 

>> Panelist:  Thank you very much.  You couldn’t pick an easier question to start with, could you?  (Laughter)

This problem of interdisciplinarity, by the way, is one of the challenges Professor Gasser mentions that I didn't cover during my presentation. It's already there. And pretty much any colloquium on artificial intelligence gets to that point. The one in Rio, for example, was particularly interesting because it already gathered activists, journalists, data scientists, policy makers, computer scientists, and engineers. And I think that is a start, from an international perspective.

I would like to import something from another environment, and that might throw some light on what we are looking at here. The problem of security through cryptography and legitimate access by authorities to content has long been tackled in conferences of activists on one side and conferences of law enforcement on the other side. I have a double nationality: I'm a computer scientist and a lawyer, so I used to be with activists on one side and with law makers and law enforcement authorities on the other side. They hardly ever talked. And it was a pleasure for me when, in I think May last year, there was a huge gathering in The Hague at Europol where everyone was there; I believe just the criminals weren't there. They were the only group that was not represented. Activists were there, Civil Society was there, and the judges in Belgium, for example, who are responsible for counterterrorism measures; they were all talking together. There is no magic to the very important and difficult hurdle that you mentioned. There is no magic. There has to be dialogue. This dialogue is not easy, because the jargons are different. The languages are different. The problems they face are different. So, spaces like this one here, like the IGF, are essential for that, but other, more practical work, like what the IEEE is developing and what Governments are also developing, may as well be a way forward. There is no magic. It is dialogue.

>> MODERATOR:  Okay.  Sir, yeah, okay.  Yeah. 

>> Audience: Hi. Good morning. This is Wally Bacardi from the secretariat.

My question is about the underlying technology of AI, which is software engineering. As you know, the verification and validation of software rely quite heavily on modeling and formalization processes, okay.

And now, my question is coming to the panelists, anyway: do you think these processes and (?) can be enshrined in a policy framework to serve as guidelines and best practices, or would those help to address the concerns and the challenges of AI?

Thank you. 

>> MODERATOR: A question for all the speakers. Okay. Mr. Satish.

>> Satish Babu: Thank you for that question. The IEEE has produced that document called Ethically Aligned Design, through its Global Initiative, and it has also constituted the P7000 series of standards working groups, which are working on standards for aligned design.

Coming to this question of how we review and audit the software and the principles encoded, embedded in the software and the AI: it is not an easy question. Again, the problem is, on the one hand, the rules are not really laid down as rules that we can see and touch and discuss. On the other hand, most of the algorithms are the proprietary work of companies, and they are not out in the open; we can't discuss them in public in a forum like this because they are the intellectual property of a particular company.

The third problem is the engine and the data; the Professor touched on this issue. The engine and the data are different, and the data is actually refining the engine on a continuous basis. So, these challenges are there when we talk about auditing. A human rights approach to the problem has also been mentioned: as long as we can have a regime that looks at the output of these engines, maybe that will safeguard against the concern that you are raising.

But, to me, that does not make full sense. As an open source activist, I would have liked there to be some formal review of these algorithms, and, in a forum like the IGF, a multistakeholder process for that. But, given the fact that they are the proprietary property of companies, I don't think that is going to happen in the near future.

Thank you. 

>> PANELIST: I think there are things we can learn from the Internet. Understanding the technology may require some time for the public, or for people from other areas, when it comes to artificial intelligence. One thing I'm trying to say is that there is a lot of misunderstanding about what these systems are capable of or not; even within our field there are people who do not necessarily understand it very well. So, just like with the Internet, I hope the policy in the future will be as minimal as possible: I mean, safeguard the most important principles, but do not add too many constraints. Otherwise you may never see the kind of prosperity that the Internet has brought to us; we got that precisely because we had very open, very tolerant policies, while, of course, safeguarding all the values we cherish. Adding too many constraints very early on can probably either divide the industry, or the industry will refuse to cooperate in any way, or even governments will refuse to cooperate. So, that's something I just wanted to say from a very technical standpoint. In order for the technology to evolve -- AI technology, in my view, is still very much in its infancy, right? We see promises, but much of it is still just promises. We really want to encourage this technology to benefit the world, and I'm pretty confident it will, as long as we are using it in the right way. Thanks.

>> MODERATOR:  Okay.  Thank you.  Any other questions?  Sorry.  Yes, miss. 

>> Audience: Just a comment. I am part of the IEEE Global Initiative for AI ethics, and I also chair the outreach committee, as well as working on the standards, so I can maybe pour some light on some of the questions raised here. The question of multidisciplinarity is a big one. I think that we are doing some significant work on it; the P7000 series incorporates experts from many fields. We're talking about ethical design, we're talking about safety, and we're talking about bias, as well. And it's open and free for everyone to join. We are actively looking for members to help us shape these standards. And I think that, in regard to the Ethically Aligned Design output, when it comes to really leveling out the playing field and setting up fair rules of the game, there is a lot of work to be done, that is for sure. But I think that we are at a point where we have many members, including members of the (?) company, that are actively approaching us and asking us to provide them with help, because they are not sure how to do it.

So, then again, this is also free and completely open for everyone to join. So, if this is something you're interested in, and you think you can have an impact or have an idea, then please feel free to reach out to me, and I would really love to incorporate you so we can do a better job.

>> PANELIST: If I may add: thank you very much. It was very interesting that you mentioned that initiative and that invitation.

I would like to pick up on a comment from Professor Ma. It is good to have someone from the technical side of the development of the technology saying that it is in its early stages. It's very important, because the hype has been very high for the past few years. We have come to a point where we believe we can do pretty much anything. And we will do a lot; a lot will happen through these techniques and through these methods, but we can't do everything. We listen to Professor Ma, we listen to Geoffrey Hinton, and they repeat the same thing, it seems, and it is something that the industry might not like to hear at this moment: that the technology is being offered for much more than what it is actually mature for. And I will leave you with an example of what happened to Mr. Eric Loomis from Wisconsin, whose likelihood of committing a new crime was analyzed during a trial by an algorithm called COMPAS, Correctional Offender Management Profiling for Alternative Sanctions.

The algorithm, through the inputs it was given, placed him at a high risk of recidivism, and the penalty was set on the basis of that output. Mr. Loomis challenged the rules through which the algorithm reached that conclusion. He was denied access to the way the algorithm decided. The case was challenged up to the Supreme Court of the United States, which answered in July, in the summer, that the output was enough; it was not necessary to look into the way it was processed.

This doesn't seem to be the right solution.

>> MODERATOR: Yeah, to the gentleman: I think, yeah, before going to hard rules, we should start having more mature discussion and have some model, like the three principles of robotics, so that we at least keep human dignity as the starting point, from scratch, I think.

Thank you. 

>> AUDIENCE: Thank you. I just wanted to touch on a point that Mr. Satish made about unemployment. Each country is trying to get an edge, but at the same time it seems as if they're sawing off the branch they're sitting on: if productivity decouples from wages, which is what is happening right now, you won't have anyone to sell the stuff you're producing in mass production to. So, in a sense, you're basically undercutting, undermining the economic system as AI advances, and so it's kind of like a dead end. Like a real dead end. So, I was wondering if, in the AI field, you're considering policies that are much more general, that seem completely unrelated to AI, but are preconditions for it to actually have a future in this world; otherwise the economy collapses, and obviously everything collapses. I was thinking specifically, for instance, about universal basic income and how that could help AI. And, more specifically, universal basic income not necessarily as a transfer payment, but something much more radical or innovative and disruptive, just like AI, for instance a self-generated cryptocurrency. There have been attempts at doing this; they're emerging right now, and they're not dependent on controlling the State or having all these policy discussions, but simply on people taking it up and saying: we're going to start using this, because, you know, it's the only way.

>> PANELIST: I suppose the question was directed at one of the points that Professor Satish brought up. I think it is a concern shared by anyone who looks a little bit deeper, a little bit beyond tomorrow, at AI. It has been exactly one year since we had a large conference in Brazil, in the labor courts, all about the future of work. It's a very clear view: anyone who looks mid- to long-term at AI development has to talk seriously about some kind of basic income system. So, it's in the view of all of us.

As to whether it is as disruptive as a self-generated cryptocurrency, I assure you it is not, at least not yet. This is the first time I have heard what could be the seed of a very nice idea. But, definitely, as the gentleman said, it is an interdisciplinary thing. We talk about deployment, implementation, the first economic gains, but then, in the long run, what happens? So, the basic income alternative is in the view of anyone who talks seriously mid- or long-term about AI. The alternatives are not settled yet. That one might be food for thought.

>> MODERATOR: Mr. Satish, any additional words?

>> SATISH BABU: Yeah, thank you for the question. I think this is a matter of great concern, particularly to the developing world, where employment is a universal concern -- not just for the developing world, but some of us believe that this may impact Developing Countries asymmetrically. Developing Countries have not so far talked about universal basic income; this has mostly been a concern of the developed world. So, we are very much delayed, and I think the Developing Countries of the world have to sit up and take notice of what is going to happen, not only in the immediate term but definitely in the medium term. This is a real issue that you have raised.

Thank you very much. 

>> MODERATOR: Any other words? Is there another question?

Okay, miss. 

>> Audience: Hi there, my name is Katie Watson, from the U.S., and I work in the public interest. I'm interested in how you all think copyright and intellectual property law will need to change to simultaneously help this new technology grow and protect consumers (like Mr. (?), you were talking about somebody who was convicted of a crime not being able to understand why he was convicted), while at the same time protecting innovators so that they are incentivized to keep creating this technology.

>> PANELIST: Well, I think IP is always an important issue. So, what needs to change in order to foster the growth of AI?

One particular thing to think about: the current generation of AI very much relies on supervised learning, which relies on being able to consume training data, and that actually raises questions and problems. For example, one very promising area where we foresee AI may help tremendously is medicine, right? But that's also an area where the data is very sensitive. Many of our colleagues believe our tools can really help to improve diagnosis or prognosis based on medical data and inputs, given that we have seen all the histories of cases, how they have been processed, how doctors practice. But there have been, of course, barriers. That is a different industry, right? Would the medical industry let in the technology companies? There has been a discussion between Google and Harvard Medical School trying to collaborate. It isn't dead, but there are still lots of obstacles and barriers.

The question is how they are going to use the data, and that raises questions again. Who owns the data? And there is also the potential violation of privacy, and so forth. So, those are precisely the issues that I think need to be discussed in the future in order for AI to disseminate into broader fields, finance or medicine or even other areas. So, that's a very good question. I'm not an expert on that, but certainly we are aware of it. It is going to be a very major issue; if we don't resolve it, AI won't go that far.

>> MODERATOR: Okay. Time is running out. We may only have one more question. Who will ask? Okay, miss. Okay.

>> Audience: Hello. I have a quick question. We mentioned the policy challenges of AI, and we see we have different stakeholders: some from government, some from industry, and also academia. Right now AI development is still at a very early age, but we see lots of initiatives, such as from the IEEE and the EU, and also some Governments have released reports, like (?) mentioned, and universities are all doing all kinds of research.

So, what would be a good model, in your mind, for the future -- a good model of governance -- and how should the stakeholders cooperate in the governance of AI in the future? My question is for (?) please.

>> PANELIST: Thank you very much again for the question. Again, a million-dollar question.

You mean the environment where they should cooperate? 

>> The model. 

>> The stakeholder. 

>> Yeah. What is the model? How do they foster it as multistakeholders?

>> PANELIST: I think the spaces we have to discuss these issues are enough for the time being. I'm particularly worried about the IP issue that Professor (?) has just raised. COMPAS shows, absolutely clearly to me, that some delicate public uses of data processed through artificial intelligence cannot be proprietary. Now, this is a position we have been holding. We don't have a lot of support from industry on this side, evidently, but in academia and in the research environment there has been a growing voice. We are not advocating for the abolishment of all (?) and intellectual property protection, but this case specifically shows us that some very delicate, very sensitive uses of public information that affect people's rights as intensely as a restriction of liberty does cannot go unanswered behind the obstacle of intellectual property. That is something that is absolutely clear for us. How do we build that, and in which model do we implement it? That is still very difficult to state.

In the FDA for Algorithms paper from Andrew Tutt -- I think that is the name of the lawyer from the U.S. Government -- he proposes an interesting structure that is available in other fields of regulation, for when you have to audit something like that, and this might be an interesting balancing model for the question our fellow here posed before. We could try to set up a structure where, to protect the intellectual property, we just don't open everything about the source code and data, but we constitute an accredited panel of people who will audit, with a certain level of reserve, the information that is under auditing, in the public interest, and then we have to trust the position of that panel on whether the ethical and moral considerations and the legal and regulatory considerations are in place. So, again, your question is very good for a reason: it doesn't have a proper answer right now. But I think the spaces we have to cooperate and start discussing are enough for now.

I would like to see more international cooperation, because the national initiatives that are arising in the countries I talked about might not address the problem from the broad perspective of the borderless world which we have. So, I would like to see more cooperation in that sense, but the models are not settled yet.

If for nothing else, then for the reason that, as Professor (?) said, the technology for everyday applications is in its very early stages. So, I think we have a lot to see in the deployment, in the implementation, and also in the environment for discussion.

>> MODERATOR:  Thank you very much. 

>> PANELIST:  Thank you very much. 

>> MODERATOR: I'm afraid I have to say we should stop here. Many good ideas and viewpoints contributed by the speakers and the audience give us much to further develop and think over. Their wisdom and insights will benefit the development of the AI industry. If you have more interest in this topic, I think we can exchange views and discuss after the workshop. Once again, thank you very much, all of you, for joining this workshop. Thank you for coming.

>> PANELIST:  Thanks.  

>> MODERATOR:  Have a good day. 

>> PANELISTS: Thanks very much.

(Panel concluded)