The following are the outputs of the real-time captioning taken during the Thirteenth Annual Meeting of the Internet Governance Forum (IGF) in Paris, France, from 12 to 14 November 2018. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

 

>> MODERATOR:  Also, welcome to the panel.
     >> Thank you so much. 
     >> MODERATOR:  I'm a lawyer and an online moderator.  We also have Amrita with us.  So let's begin with the professor.  May I invite you to the panel?

Before beginning on the content, let's have a round of applause for our panelists. 

[ APPLAUSE ]

So UNDP brought the famous humanoid robot, Sophia, to Nepal.  She was flown in from Saudi Arabia in a suitcase, spoke with various stakeholders, and one of our journalist friends was trying to keep some distance from her.  So somehow it was ‑‑ well, she was already packed.  What is emerging from these kinds of technologies is the basic idea of this workshop.  Rather than talking more from my side, I'd like to request our first speaker to share insights from her research on AI use in Africa.  I would like to request her to start with the major findings of this research.  Thank you. 
     >> Thank you very much.  Good morning, everyone.  Just by the way, Sophia does not travel as a passenger on the plane; she travels as luggage.  And I hope that airline companies will not get into this discussion. 

It's a good thing to be here and to speak about emerging technologies and how they affect us.  I'm from the World Wide Web Foundation, so those of you who know Tim Berners-Lee know the vision he has put across to the world.  Last Monday, during the Web Summit, when he was launching the principles of the Contract for the Web, he said, and I quote, “the initial thinking was that if we bring technology to human beings, they will do good things with it.” 

So it is still the vision that when new technologies come to human beings, we will do positive things, we will do great things with them.  The Web Foundation has actually done two different studies, more than two, in fact. 

The regional one was on AI in middle- and low-income countries across the world, so we touched base with users and creators in Asia and Africa.  And this particular one focused on Africa.  There is another Sophia: this one is the Sophia bot in Kenya.  It gives users responses on health issues, especially reproductive health.  And as you may know, in Africa and many other places in the world, sexuality is not something people like to discuss openly.  That's on one hand. 

And then on the other hand, we don't have as many doctors as we need available to us.  So what this Sophia bot does is to be the doctor on sexuality.  She doesn't take leave, she doesn't go away, and she gives all the health and reproductive advice to women.  The people we interviewed told us this is easier than having to look for a doctor and schedule an appointment.  So this is one way we can use this new technology in areas where human beings are still experiencing barriers, taboos, and uneasiness around reproductive health. 

Let me talk to you about Nigeria.  Nigeria has something called Roadpiper that helps you navigate the traffic.  There are a few of you who have been to Lagos.  Its population is over 20 million and the roads are not that easy to navigate.  There is the joke that you could see someone put mousse on their mustache, get a shave, clean it up, and still not have moved.  So you can do a lot of things in traffic.  This is a solution that helps you navigate around traffic in Nigeria.  It picks up data from Google Maps, street maps and all of that, and basically directs you on how you can do traffic better.  And I cannot not talk about agriculture.  Agriculture is one area where Africa is using, and can use more, Artificial Intelligence. 

I can maybe share one ‑‑ it helps agricultural people make decisions.  It takes images, analyzes them together with data, and then advises people on what to do.  There are quite a number of others in science, quite a number of good use cases.  But generally, I can say that the areas where we would want to see more use are in public services ‑‑ we may come back to that later ‑‑ in agriculture and in health.  These are very important applications.  Another area, since we are here, is language diversity.  If anyone has met an African who speaks only one language, it means that African does not live in Africa.  There isn't any way an African can live in Africa and speak only one language; I speak about five or six.  So language translation, language facilitation among Africans, in Africa or outside of Africa, is one way that Artificial Intelligence is seriously being used across Africa.

And I think that, being here, we would want to raise that.  Multilingualism and diversity are what we fight for.  If technology should be for everyone, multilingualism and diversity should be the things that are near to our hearts.  So, in general, Artificial Intelligence is being used in Africa.  It's still being studied, and I'm happy to be on this panel to hear more from everyone as we go into the analysis.  My five minutes are done. 
     >> MODERATOR:  Thank you.  Now, let's move to Professor Liu Chuang.  She has a PowerPoint presentation. 
     >> DR. LIU CHUANG:  Very happy to be here.  My theme is the advantages and the challenges.  It's brand new: AI is coming in China.  This is a very hot topic now, and not only in commerce; in everything.  I will take some examples here of how AI is used.  One is AI in earthquake monitoring.  China has an AI earthquake monitoring system for reporting earthquakes in the world.  Most reports come within three or four minutes.  And then, for the earthquake in Haiti, only a few seconds after it happened the report came automatically.  So this is used for disaster rescue and for decision making. 

Another case is AI in cars.  The bigger companies are also trying this now, and I think it's just the beginning. 

Another case is a restaurant: AI in a restaurant.  Here, everything is automatic.  You come in and nobody is there.  You can order, and then the food comes to you; it is served for you. 

And another one is the ‑‑ China has three AI ports.  The first AI port in China started operating in July of last year, with 350 containers.  All of this is done by computers; no people. 

And another one is the Shanghai port.  This is the biggest AI port in the world, launched in December of last year.  It will reach 42 million containers per year in the coming five years, so it can carry a lot.  So AI is coming in China.  Universities, students and so on are trying to learn it.  AI applications have become a hot topic in almost all fields.  All AI-related subjects have also become hot, including big data, cloud computing, and AI science and technology.  But because everything is AI, people lose jobs.  China has a huge population, and people who lose their jobs cannot find new ones.  This is a big problem.  And there is also less knowledge and skill for the new positions: people get a new job but have no skill or knowledge for it. 

So how do we get to a solution?  The solution is to go to education.  But in the universities there are no classes on these subjects, so this is a problem.  People come and ask the university to open such a class, but there is no teacher.  Okay, I try to teach, but there are no materials, no books; everything is new.  There are few reading materials and books, the new classes need many other things, and there are funding challenges.  But, of course, this is also an opportunity.  This is China now.  Thank you. 
     >> MODERATOR:  Thank you very much.  Now, let's move to a small, tiny economy, Nepal, my own country, right in between China and India.  We have very young, independent technical people who have been trying to mark their presence in AI technology.  Can you give your presentation in five minutes?

This is your initial presentation.  I'll come back after that. 
     >> BIKASH GURUNG:  Sure.  I can hear you.  Can you hear me too?
     >> MODERATOR:  Yes.  We're sharing your slides as well. 
     >> BIKASH GURUNG:  Thank you for providing me this opportunity so that I can explain how Nepal ‑‑
     >> MODERATOR:  Let's wait.  We're bringing up your slides. 

Go ahead. 
     >> BIKASH GURUNG:  Okay.  Is my slide on?

I can see my face here. 
     >> MODERATOR:  Let's go on. 
     >> BIKASH GURUNG:  Okay.  Hello, everybody.  So this is Bikash.  Today, I'm going to talk about what AI's impact has been, the challenges we are facing, and what strategies could make AI feasible for us. 

So if we talk about the progress of AI in Nepal, there has been a company leading the AI revolution here, followed by lots of community organizations like AI for Development, Pilot Technologies, and Cloud Factories.

There has been lots of training going on to build capacity.  And at the daily-life level, AI technology is being used so that services can be provided 24 hours a day.  There has been technology built for the visually impaired, and another technology that provides a smart, data-driven, multicultural intelligent system.  Also emergency response, drone delivery systems, a robot restaurant; there are lots of them.  Regarding how AI has impacted Nepal: if we talk about a city, then it seems like things are technology driven.  But when we talk about Nepal as a whole, it has created ‑‑

So lots of challenges have emerged as well; people have been talking about development inequality.  Since we have been a laggard in terms of AI and robotics, the development has been progressing slowly.  As you can imagine, all the services that have been coming to people in terms of technology are built by China and the U.S.

So the economic value that could be generated does not go to an emerging economy like Nepal.  There has been a knowledge gap: this technology is not built here, and people are trying hard to catch up on it.  There is a lack of access to technology, and also on the employment side: you don't have the human resources that could support AI-based project development. 

And also, we are facing biased algorithms that do not reflect what Nepal's choices are; you have to accept a developed-country perspective.  So that has been one of the challenges. 

If we talk about challenges for the private sector, there has been an issue of accountability.  Multinational companies operate here, and you cannot bring legal charges against them; they have to follow the rules, but they are not doing it.  When we try to build local talent, their competency is not up to the level.  And there is also the turnover: people are switching from one company to another.  You have a contract for two years, and still people move from one company to another. 

For the public sector, it's an ethical issue.  You don't know if the person is providing you the right solution, whether they are doing a good job or a wrong one.  So that has been one of the issues.  When we talk about employees as well, it's a monotonous job.  And community-sector leaders have only basic knowledge; you cannot lead high-end technology with basic knowledge, but they are doing it.  There is also a lack of data availability.  The government sector also does not know much about this technology, and they are kind of scared because of the unemployment that AI will create, so they are hesitant to implement this technology.  When we talk about legal challenges, the private sector is not aware of the common plans and policies; you don't know what kind of complication will come from the government, and the government doesn't know what kind of legal rules they can implement.  For the public sector, when AI companies are initiated there is no established basis for them; they are not like a normal company, so they face hassles going from one government agency to another. 

So how can we possibly address these kinds of challenges?

From the government perspective, from what we have seen so far after discussing with the public sector, public-private partnerships would help a lot.  This would create an ethical and legal framework which would support us and reduce the challenges.  Another is a government-focused AI program.  I had a meeting with the ministry of technology about whether we could come up with a framework that could be adopted here as well.  If we could focus on a government AI program, then that would also help.  China has already adopted a ministry for AI, so there could be a department in government ‑‑
     >> MODERATOR:  Brief. 
     >> BIKASH GURUNG:  In terms of the private sector, proper implementation of a legal framework would support them.  Ethical knowledge transfer to each and every private company could also work, and technology and knowledge transfer to employees as well.  Technologies are being built in China and the U.S., so if they could be transferred here as well, that would support ethical performance. 

AI-based enhancement could also let the public feel that this is an opportunity for them, as could access and availability of resources and technology to the public.  And one of the things Nepal is lagging on is AI education, so AI-based education for the community would also help. 
>> MODERATOR:  Thank you very much.  I'll come back to you in the next round.  Now, I'll go to Professor Park.  We have heard about the use of AI technology from Africa's perspective, from China's perspective as a country developed in AI, and the perspective from Nepal.  So what do you see as the legal issues? 

     >> Those of you who cannot find my name in the program, you are not doing anything wrong.  I was not planning to speak here; I was subbed in by my good friend because another panelist could not make it due to a visa problem, I believe, which shows another big divide between the South and the North, because it's usually the panelists from the South that have this visa problem at the last minute. 

I put my name up there just so you know it.  I think there are largely four different sets of ethical issues arising out of AI.  One is what I would call anthropomorphic: what distinguishes AI is that it challenges the concept of humanness.  Depending on whether AI is considered a human being or not, we change our analysis of freedom of speech and also of privacy.  Until a few years ago, Google used to have a service where a machine would read the contents of your email and attach related advertising links. 

So if I write an email to my friend saying let's go for wine, the email will include links to the best wine house nearby.  Did Google do that believing that having a machine read the contents of email does not infringe privacy?  Society didn't know how to respond to that either, and the issue went away unresolved when Google stopped the service.  Google still has machines read the DNA fingerprints of video files attached to emails, and when they identify child pornography, they actually report the sender of that email to the police.  So having a machine there in your private space, having an object in your private space: does that violate your privacy if there is no human being actually absorbing the data produced by that object?
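
That attachment-scanning example is essentially automated fingerprint matching.  A minimal sketch of the idea, in Python, might look like the following; the blocklist and any file paths are hypothetical placeholders, and an exact SHA-256 digest stands in for the perceptual fingerprints real systems reportedly use.  The point it illustrates is that a file can be flagged and reported without any human ever seeing its contents.

    # Sketch only: flag a file if its digest matches a hypothetical blocklist.
    import hashlib

    KNOWN_BAD_FINGERPRINTS = {
        # placeholder value; a real list would hold fingerprints of known material
        "0000000000000000000000000000000000000000000000000000000000000000",
    }

    def fingerprint(path):
        """Return the SHA-256 hex digest of the file's bytes."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def should_report(path):
        """True if the attachment's fingerprint is on the blocklist."""
        return fingerprint(path) in KNOWN_BAD_FINGERPRINTS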

If there's a dog walking into the sound booth, do you consider it a privacy violation?  If there's a person walking in, you definitely see it as a privacy violation.  How about a bar of soap in your shower booth?

Similar issues arise for freedom of speech, depending on whether you see AI as a human being or not.  Intermediary liability safe harbors have been the legal tools to encourage and promote the growth of civic space on the internet.  Now, one of the reasons we promoted this liability safe harbor was the belief that if we hold intermediaries liable, they will start pre-approving the content posted on their services, and prior censorship in the human rights field is a no‑no.  But what if that pre-approval is not done by human beings but by a machine?  What if Google or Facebook comes up with AI technology that filters out fake news?

And if they have the technology, then that takes away the legal argument for having the safe harbor.  So I'm sure the policy people in those companies are in a really difficult situation: the more capability they develop for detecting illegal content online by machine learning, the stronger the reason for taking away the liability safe harbor.  Having been a free speech advocate for many years, I cannot imagine a world without the intermediary safe harbor.  That's one set of issues arising out of this challenge, this dilemma.  Can you go to slide number 17?

I only have five minutes, so this is the only way I can keep within time.  So that's one set of issues.  The second set of issues is economic.  We already talked about it: just as in capitalism the ones who had capital could exploit other people, making people depend on the use of that capital to create value, as robots start providing labor that replaces human labor, there will be more inequality.  I keep skipping ahead because I talked about that already.  The third is algorithmic bias. 

People know banks will use algorithms, so poor people will have less chance of getting loans.  If you think about it, that's not an AI problem; it's a human problem.  Loan officers also reject applications from poor people, and AI trained on that data will continue to do the same.  It's not really an AI problem; it's intensifying human bias through automation. 
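
A minimal sketch of that mechanism, with entirely invented records (the group labels below stand in for any proxy of wealth or origin), shows how a model "trained" only on past human decisions reproduces the same approval gap:

    # Sketch only: a model trained on biased historical loan decisions
    # simply reproduces that bias.
    from collections import defaultdict

    history = [
        {"group": "wealthy", "approved": True},
        {"group": "wealthy", "approved": True},
        {"group": "wealthy", "approved": False},
        {"group": "poor", "approved": True},
        {"group": "poor", "approved": False},
        {"group": "poor", "approved": False},
    ]

    # "Training": learn each group's historical approval rate.
    stats = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for record in history:
        stats[record["group"]][0] += record["approved"]
        stats[record["group"]][1] += 1

    def predicted_approval(group):
        """Approval probability learned purely from the biased history."""
        approved, total = stats[group]
        return approved / total

    print(predicted_approval("wealthy"))  # about 0.67: the old human bias, now automated
    print(predicted_approval("poor"))     # about 0.33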

The fourth ethical challenge of AI comes from data monopoly.  AI is like Windows: it can be copied, and a lot of people can have copies of AI.  What makes or breaks it is whether you have the training data.  Who is building the silos of training data?  That will also decide resource allocation and resource distribution.  So these are the challenges.  My answers to each of the challenges I will get to in the next round. 
     >> MODERATOR:  Thank you.  One question.  Do you think we need any legislative development on governance of AI?

What do you think?

Is it just technology, so we don't need anything specifically new ‑‑ I mean, do you want to legislate AI technology, or keep going with the development of AI without any new AI governance from a legal perspective?
     >>  Well, you're kind of coaxing me into disclosing my answers early.  Out of the four challenges, I think the real challenge is the fourth one, data monopoly.  On that issue, I wish there were more legislative efforts, not to force people back into a notion of privacy where people don't share data with anybody, but legislative initiatives that encourage people to share more data, equitably.  If people's data is being put into silos without compensation, maybe we can make laws that encourage those silos to be shared with many other people, so they can also use that data for training AI and benefit from it in making a living or doing research.  That's a line of thought I would call data socialism.  I know socialism has a bad name in other parts of the world, but I have no problem saying it. 
     >> MODERATOR:  You want to say something?
     >>  Yes, I do.  Who remembers when television was the newest technology in town?

Good.  Does everyone here remember when the mobile telephone was the big deal?  Who remembers that?

Right.  Your children will have to visit a landline in a museum.  Some of us remember when the telephone itself was the new technology.  I don't think we have a problem with Artificial Intelligence as a technology.  Ultimately, it's a human issue; it is still about what human beings do with technology.  In 20 years, we'll be having new emerging technologies, and you can be sure it's not going to be Artificial Intelligence; it's going to be something else.  Some time ago we were talking about nanotechnology.  I think we've lived through all of that. 

Now, should we be regulating individuals, human beings, human activity?

Yes, there are biases.  One of the questions I wanted to ask is: who is developing Artificial Intelligence technology, and for whom?

If it is the Silicon Valley white male, then the African female person might be in danger, in mentality and in technology.  One of the principles of the Contract for the Web, apart from respecting personal data, is that we develop technologies that support the best in humanity.  You cannot sit down and write from A to Z everything you want to regulate, but we can give clear views and clear directives about what we know is detrimental and let developers know that.  You have young developers who are 20 or 21 years of age.  They've always had smartphones that could talk; they've always had enough coffee, enough food, always had enough of most of the things you can talk about.  When you have such people developing technology for the 70‑year‑old farmer in Nepal, Nigeria or Kenya, they don't think the same way.  That is why you have to put down some kind of regulation, some directives, so that people know that, even if you are the one developing the technology, the people who will be using it will be different from you, and to focus on the needs of human beings.  When men develop code, ego is a big part of it.  But when women develop code, it's about solving problems.  Men still want to solve problems, but they want to leave their signature in the source code. 

So, guys, you need to tone down the ego, and let's code for what solves problems for everyone in every language.  That is also very important. 
     >> MODERATOR:  Thank you.  Would you like to respond on the regulation perspective for AI?
     >> DR. LIU CHUANG:  Because this is new, it is changing a whole lot about society: the economics and social interactions for everybody, I think.  So this is a new order for society, and we should think about the goals and link them.  We feel the fundamental scientific foundations are very important, and technology is very important.  So for developing countries, I think not only the government but all the sectors, education, research, companies, the private sector, should think about this for the benefit of the people.  With Artificial Intelligence there are some things that are good and some things that are not good, so we need to think about this in advance.  We should think, from the United Nations and the community, about the principles for AI development, to use these advantages to create something new, and make it very clear.  In this case, developing countries should pay more attention to education; otherwise they will never catch up to these advantages.  So this is my opinion about this. 
     >> MODERATOR:  Thank you.  I'd like to open the floor.  One, two, three.  I'll take three first.  Please, your name for the record. 
     >> AUDIENCE:   [Inaudible]. Thank you for the opportunity.  I'm a student.  These were very interesting presentations from all of you.  There has been discussion here about legal and ethical issues.  So before we decide whether to regulate and how we regulate, from the law perspective, can you tell us where we are in the process toward the independence of AI in the future?

We need to have a clear legal relationship, so we can distinguish to what extent there are obligations and to what extent there are rights.  And then maybe we can refer to present cases where, for example, there is, I don't know, an accident or some misuse in processing.  I also want to know more about what has been done in China to address the job challenge produced by AI.  We understand the population of China is the biggest of any one country in the world, and I believe that's one of the most important factors in China: how to ensure that people are still in demand.  So how do you do that?

And last but not least, do you think that at this phase of AI technology, the technology is already spread out?

Or is it still concentrated in some area?

Even the internet, which is already being used and integrated around the world, is still not at 100%.  I don't know the exact percentage of people who use the internet; I don't think it's 80%.  I heard it's just 50% or even below.  If the technology keeps advancing, the people left behind never get the chance; the rich become richer and the poor become poorer. 
     >> AUDIENCE:   My name is Steve.  Thank you to the panelists.  I wanted to get your thoughts on the role of policy in protecting jobs.  I think there are two ways we can look at this.  One is that AI is happening, it will continue to happen, and we need to keep feeding it and pushing it as far as we can; then, as in China, it falls to education to prepare humans to live in this AI world.  And I worry we put too much on education.  We're asking too much, because obviously humans can't compete with AI.  Humans can't compete with simple Excel in terms of running numbers, adding up, and running formulas, and we don't need to; we've learned to work well with Excel.  If you are an accountant, that's your ultimate tool.  If we only say that AI is happening, and we don't actually take a step back, education may not be able to keep up.  So I'd be interested in your thoughts on China, and perhaps others, on policy stepping in and saying let's hold on.  Thank you. 
     >> AUDIENCE:   Thank you.  I'm Dale.  My question is to the colleague from Africa.  You said that in Africa, speaking several languages is quite common, and you said that Artificial Intelligence can help multilingualism.  I think this is important.  But what are you thinking about?

Are you thinking about automatic translations or other devices for improving and encouraging multi‑lingualism?

Thank you. 
     >>  I'll be brief on one of the questions, from UNICEF, on the basic income proposal.  I think we already went through this evolution: many people were excluded from the benefits of capitalism, and many European countries turned to welfare socialism.  Now AI will replace capital as the force concentrating wealth.  Well, then, we will intensify welfare efforts.  So, as my slide said: education and welfare.  And also, it will create other jobs, new jobs that didn't exist in a pre‑AI world.  If you look at the change of jobs from the industrial age into the next revolution, new jobs are created as more people are chased out of exploitative working conditions and wages. 

I think the answer is there; the problem is execution.  And also, personally, as I said before, I was put in at the last minute to fill the empty panelist spot, and I have a prior engagement, so I have to leave at 10:45. 

     >> There's already a Partnership on AI.  It has been created; it's got Amazon, Facebook, Google, Microsoft, IBM, Apple.  That partnership already exists, so there is that kind of thing.  Yes, we're at about a 50/50 connection balance at this moment, and that's why we launched the campaign for the web, to encourage everyone.  Please sign up to the principles of the Contract for the Web if you have not done so; I'm happy to talk about it.  We cannot be doing AI for half the world when the other half is offline.  That is one of the things I'm passionate about at the Web Foundation.  In Africa, or everywhere in the world, I don't know who cares about AI as Artificial Intelligence; we care that it facilitates life.  We want to see it in devices and in public services, making life easy. 

So the translation may not happen right in front of me on my phone; the work has been done somewhere else.  The work could be done in announcements, or in health, when I want to speak to a doctor; the technology would have already happened in the background.  My point here is that human beings still want to speak to other human beings.  I still want to go to a bank, and if something is wrong, I want to be able to yell at somebody, kind of, right?

That's the human part of me.  My money's here, I want my money.  Why are you making my life difficult?

Sorry, could you say that again?  Sorry, this is the UN; I shouldn't be saying that.  The technology comes at the back end.  So it's not something specific to Africa; it's the same the whole world over. 
     >> DR. LIU CHUANG:  Besides education, in China there is legislation paying more attention to this.  I think a formal law specifically on AI has not come yet, but AI relates to four things: one is big data; the second is human behavior; the third is cloud computing, or e‑computing; and the fourth is high‑speed communication.  China has a very serious legal system managing these four different issues.  So, of course, there is some risk around AI development, but I don't think China will have big trouble.  The different sectors can work together to propose how to deal with this.  Right now, everybody pays more attention to the advantages that help themselves: the economy, education, research, and other things going this way. 

This needs international communication and interaction, both between countries and within the country.  So I believe international collaboration is also very important for China.  I think it's an opportunity for everybody.  Thank you. 
     >> MODERATOR:  Thank you.  Any more questions there?

No. 

So, considering the limited time of the speakers, I'd like to thank all of you on the dais, all of you participating in this session, and the supporting technical team.  And I would like to thank the professor for coming here and organizing this workshop.  Thank you very much.