
IGF 2019 – Day 0 – Saal Europa – Pre-Event 22 Promise Of Safety And Security In The Digital World - RAW

The following are the outputs of the real-time captioning taken during the Fourteenth Annual Meeting of the Internet Governance Forum (IGF) in Berlin, Germany, from 25 to 29 November 2019. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

 

>> This was done by DARPA, the research agency of the U.S. Department of Defense. They trained a system with pictures of tanks, and the second picture, the one in the center, was manipulated: they added a little bit of green and yellow hue. By changing only a couple of pixels in this picture, it threw off the AI system, and it classified the picture as a British ambulance.

Obviously, that's a horrible mistake, but it just goes to show that it's very, very important to have a robust AI system. Otherwise, there can be grave consequences. Here are a couple of examples.
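
The tank example above is the classic adversarial-example effect: a perturbation too small to matter to a human flips the model's decision. A minimal sketch of the idea (this is not DARPA's system; the "model" here is a toy logistic regression with invented weights, and the contrived input and step size are assumptions for illustration):

```python
import numpy as np

# Toy stand-in for an image classifier: a logistic-regression "model"
# with fixed, randomly chosen weights. Class 1 = "tank", class 0 = "not tank".
rng = np.random.default_rng(0)
w = rng.normal(size=64)   # weights for an 8x8 "image" flattened to 64 pixels
b = 0.0

def predict(x):
    """Probability the model assigns to class 'tank'."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# A contrived clean input the model classifies confidently as a tank
# (deliberately aligned with the weight vector).
x_clean = w / np.linalg.norm(w)

# FGSM-style perturbation: nudge every pixel a bounded amount (eps) in the
# direction that lowers the class score. For a linear model the gradient of
# the logit with respect to the input is just the weight vector.
eps = 0.25
x_adv = x_clean - eps * np.sign(w)

print("clean:", predict(x_clean))   # confidently "tank"
print("adv:  ", predict(x_adv))     # pushed toward "not tank"
```

The per-pixel change is bounded by `eps`, yet the summed effect across all pixels is enough to flip the decision, which is why a "couple of pixels of green and yellow" can break a real classifier.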

So, it's not only about visual recognition, recognizing pictures or videos. It's also about audio data, tabular data, voice recognition, and all of that, because in the background, without you or any user noticing, someone might attack the AI, whether it's in a smart home, in a drone, or in any other machine learning system.

And that's why it is just very, very important to increase and ensure a system's robustness, and it's absolutely shocking to me that this is not a bigger issue.

Basically, there are three types of quality parameters when we talk about AI quality. Number one is robustness. These are two pictures of an AI system that we were able to hack. At our company, we do not create neural networks or build our own AI systems; we get access to other organizations' AI systems and we take them apart.

We hack them and we're always able to get into the system and we're always able to either make things disappear or add things or just show the user that their system is not safe.

In this case, this was a camera deployed in a car of a very large OEM, and the picture on the top shows the AI system operating properly: it recognizes the street, the signs, and the pedestrians.

And then we were able to hack into it and make the pedestrians disappear. Now, that did not go over well. They did not enjoy that when we did it. But, on the other hand, they were very happy, because it showed them that there is a group of people who are able to get into their system.

In a very short amount of time. And it's a little scary, because if the car doesn't recognize the right things, it's bound to cause an accident.

Number two is comprehensibility, and that is about investigating the decision tree. Why is the system making this decision instead of that decision? Are the decisions being made in a transparent way? What's the coding behind all of this, and does the company want to be held accountable for the decisions being made by its system? We do have a couple of mechanisms to make sure that the decisions the system makes are basically in sync with what the company wanted it to make. But also, if this is used for analyzing somebody's credit score, and your credit score is lower than your neighbor's credit score and you want to challenge this, the company that rated your credit score has to allow you access to the data, or at least to the decision-making process. That is why this is also important.

Number three is the functionality of the system. When we talk about AI, we usually talk about machine learning components. With machine learning, you train a system with, hopefully, properly labeled data. But over the course of its lifetime, it keeps learning, and if it encounters certain data with certain results, it may end up running its own agenda and practicing fraudulent behavior, which the company obviously may not want.
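
For the credit-score example, the simplest form of the comprehensibility requirement is being able to decompose a decision into per-feature contributions that can be shown to the customer. A hedged sketch, assuming a purely linear scoring model with invented feature names and weights (a real scorer would be far more complex):

```python
import numpy as np

# Hypothetical linear credit-scoring model. Feature names, weights, and the
# applicant's values are all invented for illustration.
features = ["income", "debt_ratio", "late_payments", "account_age_years"]
weights = np.array([0.5, -0.8, -1.2, 0.3])
bias = 0.1

def score(x):
    return float(x @ weights + bias)

def explain(x):
    """Per-feature contribution to the score: weight * feature value.
    For a linear model these contributions sum exactly to score - bias,
    so the whole decision is accounted for."""
    return dict(zip(features, weights * x))

applicant = np.array([1.2, 0.6, 2.0, 0.5])   # standardized feature values
print("score:", score(applicant))
for name, contrib in sorted(explain(applicant).items(), key=lambda kv: kv[1]):
    print(f"{name:18s} {contrib:+.2f}")
```

This is exactly the kind of accounting a customer would need to challenge a rating; the speaker's point is that for deep networks no such clean decomposition exists, which is why dedicated tooling is needed.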

So, you have to make sure that the functionality of the system is actually what you want it to be, and you have to make sure it's free of any bias, any racial or gender discrimination, and that it's operated in a fair way.

Without trying to answer the question of what's fair, and all that. This is a picture of the software that we've developed. It's a bit outdated, because we're working on it every day. We've got patents on it. We've been able to use it over the past 12 to 18 months, but our goal is to make it user-friendly and deploy it in the cloud as a cloud service, so that companies who want to check their AI quality can just plug into this program, which will be called aidkit, and then run their own tests and see if their AI system is robust and doing what it's supposed to do.

We put together an industry consortium, let me say it this way, and we put together a technical standard. The heart of this standard is what's pictured there in the center: the AI life cycle. We got together with larger organizations, so this is not just a group of start-up dudes who decided that this is the right way of looking at it. We got together with Microsoft and SAP and Bosch and Continental and a bunch of others, a bunch of research organizations as well. And we decided that this is the way to look at it. You have to look at the model. You have to look at the data: the training data, the input data, the output data. You have to look at the platform, the actual hardware of the car or wherever you're going to implement your system, and the environment. This is where we start our analysis. So, if a company wants us to check their system, we always start with the environment. We can build threat models. We can ask: what are the likely scenarios this system is about to encounter once it's deployed? And then we simulate these scenarios in the software, and we attack the AI system with those simulated situations.

I'm not sure, let me see, yes. So, in the literature, there are a handful of attacks, something like 70 hand-tailored adversarial examples or attacks. What these companies do is: they have an AI system, they create an attack, they launch the attack on the system, and then they fix the system right there. And then the system is basically protected against that one particular attack.

And that's what the graph on the left is supposed to show. You fix your system against individual attacks, but, of course, if you think about a traffic situation, there are millions of scenarios that can pose a threat to a car or to a pedestrian. We looked at all of these individual attacks, so we took them apart, and we realized there are certain mathematical building blocks behind each attack. We reassembled those building blocks and added more attacks.

So, we did a threat model analysis: what can happen to any given system? Is there fog? Is there snow? Is there rain? Are there shadows on the street? All of that stuff. We simulated this mathematically, and we ended up with roughly 10 million of these adversarial attacks, to ensure that if you have an AI system and we hammer it with millions of attacks, we will find the weak spots, and we can identify them and make sure they don't cause problems in the future.
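
The idea of composing "mathematical building blocks" into many simulated scenarios can be sketched as follows. This is not the speaker's actual tooling: the model is a toy brightness classifier, and the fog/shadow/noise corruptions and strength values are invented stand-ins for the real physical simulations:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

# Toy "model": classifies an 8x8 grayscale patch as bright (1) or dark (0).
def model(img):
    return int(img.mean() > 0.5)

# Building blocks for simulated conditions (toy versions).
def fog(img, strength):        # blend the image toward white
    return img * (1 - strength) + strength

def shadow(img, strength):     # darken the left half of the image
    out = img.copy()
    out[:, : img.shape[1] // 2] *= (1 - strength)
    return out

def noise(img, strength):      # additive sensor noise, clipped to [0, 1]
    return np.clip(img + rng.normal(0, strength, img.shape), 0, 1)

blocks = {"fog": fog, "shadow": shadow, "noise": noise}

def sweep(img, expected, strengths=(0.2, 0.5, 0.8)):
    """Apply every pair of building blocks at every strength and record
    which combinations flip the model's decision (the 'weak spots')."""
    failures = []
    for (n1, f1), (n2, f2), s in product(blocks.items(), blocks.items(), strengths):
        if model(f2(f1(img, s), s)) != expected:
            failures.append((n1, n2, s))
    return failures

img = np.full((8, 8), 0.7)          # a patch the model calls "bright"
weak_spots = sweep(img, expected=1)
print(len(weak_spots), "failing scenarios, e.g.", weak_spots[:3])
```

Scaling the same loop to many building blocks, many strengths, and deeper compositions is how a handful of attack primitives turns into millions of generated test scenarios.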

So, we build a firewall around it in order to ensure that the system is safe.

This is part of ‑‑ I'm surprised by this slide right now. But it's just a brief summary of what we did in the past, why the whole discussion about AI safety is very, very important to us, and also important to basically everyone else who wants to implement AI, and why we think safety and security is a major, major issue when we talk about AI. Thank you.

(applause)

>> Okay. Thank you. You want to take a seat? We have half an hour left now, so I might skip the first part that I had planned.

>> Well, I think, any questions? I think I covered a lot of ground. If you have any questions about, you know, the ten, 15, 20 slides, I forget how many, feel free to ask.

>> I've got a question about, I guess, the marketplace. When you say there's no one else out there doing that, have you also considered existing IT firms that are developing their own in-house AI improvement mechanisms?

>> Sort of. There is one company in Silicon Valley also talking about robustness. We're not necessarily talking about interpretability or improving interpretability; I know there are a lot of organizations that talk about that. But we are specifically talking about robustness, and specifically gearing our service toward companies that want to deploy AI systems in cars or in medical devices or in the aircraft industry, where it's absolutely necessary that the systems run in a safe way and cannot be attacked easily.

So, that's why I think there's just no one else who is as focused on robustness at this point. The other company is called Calypso. They're in Silicon Valley. They have an awesome website. We do not. That's where it ends. It's a very cool website. They talk about robustness. But they do not have the tools that we have.

>> Okay. One more question, and then maybe the others can come afterwards, so we can get into the discussion.

>> Thank you. How easy is it for human beings to comprehend how a particular machine learning system arrived at a particular decision, especially considering that the General Data Protection Regulation requires that any decision should be interpretable, so that a human being can explain how the decision was made? How easy is it to examine those decisions and be able to say how they were reached?

>> Impossible. I mean, it's a very short answer: it's impossible. That's what I was saying in the beginning. You cannot check a system's quality by code review. You have to have a tool that can basically speak the language of the AI, that can take it apart, in a sense. This is also one of the reasons why it is absolutely shocking to me that we don't talk about quality: because we as humans do not have the cognitive capacity to look at these algorithms, at these neural networks of, I don't know, 20 million parameters, and find errors.

It's just impossible.  And that's why we need tools that we can rely on, and the tools, then, you know, have to really check a system's robustness and functionality before we deploy it.

I mean, you know, that's the whole point.  We can't.  You can't just look at a system and then decide, oh, this, that thing is safe.  It's just impossible.

And if you look at the handful of incidents that have happened to Tesla drivers: Tesla refused to let people look into the black box and see why the car made this turn or that turn. Why did it change lanes here and not there? Why did it run over a runner? They're not going to let you look into the thing, but it should be in their vital interest to run a couple of safety checks.

And actually, if you own a Tesla and you park it at night, when you enter the car the next day, there may have been a handful of software updates.

And nobody asks you, as a driver, whether you want these updates or not. Nobody runs a test to see if the quality of those software updates is okay.

It's just not done.  And it's crazy.  I mean, it's just weird.

>> Okay. So, one of my questions would have been: I guess you will have work for the next 50 years? Or are we done at some point, and can we say, now we got to a solution and now it's safe?

>> We had an appointment with Google two weeks ago, and we talked about this. It was a fairly short conversation. After two minutes, they said: you will have enough work for the next 50 years.

So, we did not speak before this so it's funny that you mentioned this.  They said, you will not run out of work because this is going to be the number one issue for the next couple of years.

But, it's just a little bit difficult because there are so many different domains of where you implement your AI system and we're not the expert in like the medical domain or the expert in the aerospace domain so we have to rely on the expertise of others.

I was showing a picture of Volkswagen, so, they're working with us because they know, they know all about cars.  We don't understand cars.

But, in order to get into all the other domains, we have to work with other organizations, that are active in these other domains.

And it should be in their own interest to ensure the safety of their systems.

>> And I have one last question for you, and then I would like to open the discussion. We've basically been looking at the technical dimension, but when we talk about AI, mostly the ethical dimension comes into the discussion. Would you say it's possible to disconnect these two dimensions, or how do you deal with it in your work? I mean, you talked about fairness and functionality. You didn't want to say what is fair and unfair. But would you really want to disconnect these two dimensions?

>> Yeah, I don't know. I guess you can make sure that your system is trained in a certain way so it doesn't have a certain bias in it. But when it comes to these typical examples that were run by MIT, if an accident is unavoidable, should the car run over a kid or run into a tree? I guess these are ethical questions, and I don't know how you can put the answers into a neural network. I don't know how to connect those two dots. Yeah. Sorry.

>> Okay. Thanks. So, now I wish to open the discussion, and I would like to start with a question. In Germany, we are having a discussion right now about whether there should be a labeling obligation for products that use AI.

So, whenever there is some AI used in a product or an application, it should be stated somewhere. So, just a quick check with you: who would appreciate this? Or who would say ‑‑ so, it's not many.

>> That's only five out of, I don't know, 30 people?

>> So, the others are tired or would say, I don't see the point in knowing when AI is used?

>> There's a famous Google video, well, it's actually just an audio thing, where a person is calling a restaurant to make a reservation. I don't know how many people have heard about this. I guess no one. But, so, this is like one out of three. It's just an audio tape, and it's two male voices, I believe. And one is saying: I want to reserve a table at your restaurant, Saturday night, 7:00 PM, five people, for two hours.

And the person on the other end of the line is like asking the right questions, you know, what time?  How many people?  Vegetarians, blah blah blah.  And it's just a friendly chat.  I don't know, it takes like a minute or something.

And then, at the end, Google said: okay, this was our latest AI thing. And it just gave me the chills, because you really couldn't tell that this was a computer. It was pretty impressive.

And you know, as a private person, I don't know if I would want that.  You know, ‑‑

>> Want to know?

>> Yeah, no. Like, whether I want to have these interactions without knowing that I'm talking to a robot, or, like, an ML system, really. So, yeah, I'm a little torn, but I'm only torn because I'm working for an AI company. But I do think most people are not too happy that there will be these interactions in the future.

>> So, maybe we stay with making a restaurant reservation. Would that make a difference? Whenever you do a survey, or whenever you call a hotline, they say: this call is being recorded; if you don't want this, please tell me now. Would that be a way, with AI in these situations, to have the choice?

>> It's unfair because we're the only ones with microphones.

>> No, I'll get to you whenever you.

>> Thank you very much. In Germany, we are very good at creating labels for everything. We've got 20 different labels. Some of them are real labels; some of them are just labels of some retailers and things like that. And with something like AI, there is no real categorization. It's not like, this is AI, and this is not. Making a phone call to a restaurant with a voice that is nearly human, this is AI, right? But if I use a search engine and it's just somehow guessing, based on my typos, what the right thing is, this is also AI, and actually, this is the part of AI that I really like.

So, labeling these things, I don't think that this is the right approach. People should get used to it, like they got used to cars, like they got used to the internet, like they got used to phone calls 200 years ago. So.

>> Okay. Thank you. Maybe to give you an answer to this: if people don't know that AI is used, how would they get used to it? If you drive a car, you know you drive a car, and then you can get used to it whenever you do it. But if you don't know when AI is used, and that's how it is today, that could be a problem.

>> Actually, there is a lot of electronics in the car right now. Most people are using it already, things that keep the car in its lane. This is not really AI, just a couple of sensors measuring distances and stuff like that. And things like this bring more benefits to the human being than harm. There was a story just a year ago or so. A car driver was in court because he had hit a bicycle, and he was like, no, I wasn't speeding. And they went to the car and checked the computer in it, and they realized this guy had been driving 70 kilometers per hour in the city. That was the reason he went to jail. We already have this in cars. It's not really AI. But things like this are already in cars, and most people aren't aware that every single meter you drive is measured today, already, without AI.

>> There were two comments right there.

>> Yep.

>> I'm an MP from the DRC. So, on the question of knowing whether AI is employed or not: I think we should be informed. Because as a customer, we have to make a decision, whether I want to be served by AI, whether I have the choice between AI and a human. And if I don't have the choice, at least I should be informed, because AI is not only in cars; it will go into every field. Maybe tomorrow, you go to the hospital, and the surgery is done by AI. So, you either choose to go into surgery with a normal, human doctor, or an AI doctor.

You have to have that choice. I think making it clear that this is an AI product should be done in a very clear, precise way.

And there's also a legal concern on that.  Because AI, it's still a machine.  A machine can have a bug.  It can be hacked.

So, if that machine makes a mistake, who will face justice? Will it be the creator of that machine? The company behind that machine? Because in the end, you cannot put that machine in jail. You can only shut it off.

So, we need to sort of see the legal part of AI.

>> Okay. Thanks. Yeah, I'll get to you. I'd say we should definitely talk about this legal aspect, but maybe not now. So, I would like to focus on this first part: do we need a label? And as I understood what you said, you would say yes, the consumer needs to be informed and needs to have the choice whether to use this AI product or a human in the background, so to say.

>> Hi, I'm Klaus. When the discussion started, I thought about the label "Made in Germany," which, when it was introduced, was meant to scare people off from buying the products. However, over time, people came to appreciate the quality. So, I think being clear that AI is being used, it should be mentioned. It could be labeled. And maybe in the future, even a certain "trusted AI" label or something like this could be developed, to ensure a certain quality and to go along with a code of conduct or human rights and certain things implemented into it.

>> And, sorry, just to comment on that for a second. That is something we're working on. Well, not working on ourselves, but we're talking to organizations that do certifications, you know, the ones that check your car and elevators and God knows what else.

We are trying to get to a point where we establish technical standards, so that we can define certain requirements an AI should meet. And then there should be a third party that can check your system with their tests, with their standards, and you can get a seal of approval, and as a company, you can advertise with that. You can say to your customers: look, our AI system has passed all of these tests, and it's safe and always updated, et cetera.

This is not done today, but I think that should be the future. Like I said, we are talking to certain certification companies in order to get that done, and the government is talking about it, actually. At least the German government is talking about it. I'm not sure about other countries, but I think that is the future, and it has to happen at some point. Anyway. Sorry.

>> Hi, good evening. My name is Kevon Swift. I work with LACNIC, the regional internet registry for Latin America and the Caribbean. But I'm entirely interested in this from a personal perspective, and I wanted to play devil's advocate by asking: why is it important, in certain situations, to know whether you're dealing with AI or not? I mean, given that we have the quality assurance issues dealt with, if I'm making a reservation at a restaurant, maybe in that scenario, and in other commercial contexts, it may be better to be served by AI, because there will be less hesitation. There will be better solution finding from an AI than from a physical person. Now, I think there's a distinction to be made when you think about it from an ethical standpoint.

So, of course, it's aligned to the type of information or data you share. Sensitive data dealing with your medical records, your political beliefs, those sorts of things, you should know about. So, there's a need to have a label in the sense that if you're sharing this type of data, you should know you're dealing with an AI agent. But in the majority of other cases, buying something, needing support from a hotline, making a reservation at a hotel, why should it matter? I think you might actually get a better level of service. You might actually have a lot of things solved that a human wouldn't be able to solve, and I think you would just be happy. If I were a company using it, I would just have it deployed to make sure my ratings and my reviews go up.

>> Okay. Thank you.  Okay. We have another.

>> I'm not sure my question has to do with AI and democratic services, but I'm from Belarus, and we're entering another phase of discussions about integration with Russia, and Russia has an ambitious plan to use AI for improving the economic behavior of its citizens and for monitoring more of their activities.

Social activities. And I just heard about research at Humboldt University in Berlin that basically shows AI algorithms being able to determine a person's sexual orientation and many other demographic parameters from just two or three sentences of text, or just a short bit of text written by that person.

So, if trust in AI is basically increasing, that kind of gives permission for totalitarian governments to use more AI technologies to watch more. If the AI serves people better, people will trust it more and give it more information. I don't know. Is there a way to avoid this? I'm not sure.

>> Well, I don't disagree with the last two comments, not at all. As long as it's not safety-critical, it doesn't really matter. But the other thing, and this is a whole other can of worms we're about to open, is what's going to happen to all these jobs if you don't have any call centers anymore. There are millions of jobs involved in this, and they are very low-paying jobs.

What's going to happen to all of these jobs? This isn't safety-related. But it's just another part of the discussion that is bound to happen when governments ask: what are we doing with millions of people who are now unemployed? So, it's not only reduced to safety issues; it's also about the social and economic discussions that every country has to address sooner or later.

>> Thank you very much. It's a very interesting discussion we're having, I think. I want to come back to something you just said. My name is Clisan, by the way. I'm with the government of Canada. You talked about certification to verify that a product is reliable. But in the case of AI, you described how you start your car in the morning; Tesla has sent an update.

So, how does the certification work in a case like that? Because the AI evolves continuously. When you start doing machine learning, it learns by itself, so how do you certify that it's still, you know... what's the certification process there? I just don't technically understand how you keep doing that, because technically, with machine learning, you would have to do it every instant a new bit of data enters the system.

So, if you can elaborate on that, I'd be very happy to hear it.

>> That's a very, very good question. And obviously, you cannot do it in real time, because it takes gazillions of servers to run all these operations, you know, to check a system's robustness. But you do have to check it on a regular basis. Just like you have to check your car's brakes and taillights and everything else on a regular basis, but in this case, it has to be done online. The only question is: what are feasible intervals? How much leeway is there between the updates and the checking, the rechecking, rather?

I don't have the answer. We don't have the answer to this, to be honest. But it is something we have to talk about. I mean, me, just as a pedestrian, or, well, rather, I ride my bike most of the time: I would like to make sure that the cars and the vehicles that I encounter on the street are being operated in a safe way. So, for every company that runs these systems, it's got to be in their own interest to have these safety checks every once in a while. But, yeah, online certification, that's a big one. That is going to be a big one. For sure.
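
One way to frame the re-certification problem the questioner raises is a deployment gate: whenever the model's parameters change, detect the change and re-run the certification suite before accepting the update. A hedged sketch (the speaker proposes no specific mechanism; the parameter dictionary, the `pedestrian_recall` metric, and the threshold here are all invented for illustration):

```python
import hashlib
import json

def fingerprint(params: dict) -> str:
    """Stable hash of the model parameters, used to detect silent updates."""
    return hashlib.sha256(json.dumps(params, sort_keys=True).encode()).hexdigest()

def certify(params: dict) -> bool:
    # Stand-in for a real battery of robustness and functionality checks.
    return params.get("pedestrian_recall", 0.0) >= 0.99

class DeploymentGate:
    """Only ever runs a model version that has passed certification."""

    def __init__(self, params):
        assert certify(params), "initial model failed certification"
        self.params, self.cert = params, fingerprint(params)

    def receive_update(self, new_params):
        if fingerprint(new_params) == self.cert:
            return "unchanged"
        if not certify(new_params):
            return "rejected"          # keep the old, certified model running
        self.params, self.cert = new_params, fingerprint(new_params)
        return "accepted"

gate = DeploymentGate({"pedestrian_recall": 0.995})
print(gate.receive_update({"pedestrian_recall": 0.97}))    # a regressed update
print(gate.receive_update({"pedestrian_recall": 0.999}))   # an improved update
```

The open question from the discussion remains the interval and cost: the gate only works if the certification suite is cheap enough to re-run on every update, which is exactly the trade-off the speaker says is unresolved.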

>> Okay. As we have only ten minutes left, I would like to switch the focus. I'd say that in Germany, and maybe in Europe in general, when we talk about AI, we are often saying that we are too slow here, that we are so busy protecting our data that we cannot proceed with AI, and that other countries and other regions are much faster and not so afraid.

I would be interested in another perspective on this: if you are from another region, how is this topic discussed there? I wouldn't say that Germany and Europe are afraid of being left behind, but somehow, this is the tone in which it is spoken about. We are always saying that in China and the USA, everything is much faster, things are just being done, and we might not be able to keep up with this development. Is that so?

>> So, hello, I'm from Italy. Even though I'm from Italy, I moved to Germany, and between Germany and Italy, I can see the difference. I find that Italy itself is left behind compared to Germany.

For example, one time here, I called a health agency, and they asked me if I would allow them to record my call, and I was surprised. Wow, what is this? You know. So, I can imagine that's true: Europe is a little bit behind, and maybe southern Europe is even more behind than the rest of Europe. I can say that, for example, Spain and Italy are not at the same technological level as Germany or the Scandinavian countries. I don't know, it's my personal experience, though.

>> Okay. Thanks. So, I take from this that I shouldn't speak about Europe in general; there are also differences between those countries. Okay. So, to look even further, not only at Europe: is this discussion also taking place in other regions? So ‑‑

>> I'd be curious to learn how Canada is dealing with this. I'm sorry to put you on the spot; apologies for that. But, you know, from my point of view, there's this bipolar situation: there's the U.S., which is very, very market-driven, definitely less regulated than Germany, or Europe for that matter.

And then there's China, where you have the top-down government programs that, by pure force, put a ton of money into research, and they enable companies to produce products and to be innovative.

And in Germany, we're neither here nor there. We talk about a lot of things, and we're very, very good at coming up with every concern you might have. So, that is disabling us a little bit. But I'm very curious to learn what the situation in Canada might be.

>> Well, thank you for the opportunity to showcase my country. I always like that. Actually, it's very interesting, because there's one province in Canada, Quebec, where 25, 30 years ago, the government had this policy of just investing in research. Just pure research.

And so, it so happened that a couple of universities in Montreal focused on computer science research, and we ended up attracting a lot of academics to Montreal to work on AI.

And some of these people would have had the opportunity to go work for Google, but for a variety of reasons they chose to stay in Montreal, because they had freedom to work there and universities funding research just for research's sake.

So, this group of people, they're there, they're very dynamic, and for some reason, they have a very strong bias in the sense of: okay, we're developing this AI and we want it to serve society. So, I can't say we have huge AI companies coming out of Montreal that are going to be the next start-up, the next unicorn, but they're all working on ways to use AI to improve society in some way, shape, or form. There's a declaration on the responsible use of AI that comes from Yoshua Bengio, who is based in Montreal. So even though the environment is very permissive and the government has not particularly regulated that space, the pull there is: we must use this AI to do good for society.

As a government, and I'm from foreign affairs, not necessarily from the industry department, our take is that developing AI with only ethics in mind is not good enough. We want developers to have the human rights framework in mind, because ethics will change depending on your culture. Your business culture might be: my ethics is, I need to provide profits for my shareholders. But if you say you have to develop AI based on the existing human rights framework, you're pretty specific about what you're allowed to do, and of course, in human rights law, it's the government's responsibility to promote and protect human rights.

But there are the UN Guiding Principles on Business and Human Rights for the private sector, where companies are encouraged to abide by what human rights law says, to make sure that what they do commercially does not harm human rights in any way, shape, or form, and to be mindful of human rights in the conception, development, implementation, testing, and deployment phases. So, this is the kind of conversation we have with our private sector on AI.

So, I don't know if that answers your question, but, in a nutshell, that is what we do. So: neither the U.S., nor Germany, nor Europe. We kind of have our own space. Thanks.

>> Excellent.  Thank you. Next, back there.

>> Hi. Good day, again. Kevon Swift from LACNIC. I find it very interesting that you say in Europe everything seems to be going slowly. We, at LACNIC, are also the Secretariat of LACIGF, the regional Internet Governance process for Latin America and the Caribbean, and we just celebrated its 12th edition this year in Bolivia. And I would say that despite all of us knowing that this is a very important topic for the region, for the very first time at this IGF, in the very last session before the closing, we had just a very short discussion, all of 13 minutes, on decision making with AI and the ethics around it.

But it's interesting, because while we recognize it is an important topic, we are still at the level where we are really more focused on things like community networks, access challenges, and the protection of human rights online. So, without knowing, we always look toward Europe and say, okay, that's where the forefront of the activity is. As a matter of fact, I did a scouting exercise last year, looking at trade associations, all sorts of entities, and where events on artificial intelligence were happening, and 95 of the 100-plus events I found were happening in Europe.

So, it's interesting to hear this comment about the way you feel, so you can imagine how we feel when we had three days of the IGF process, and in the last session, when many people had already left or gone sightseeing, we started to discuss the ethics around AI.

>> Okay. Thank you.

>> But I guess that just goes to show that we really enjoy talking about it while other countries just keep developing it, developing, developing.

>> Okay. I know that you have to leave on time; it's a quarter past already. I have more questions here that I would have loved to discuss. If you are interested in discussing further, you can stay here. I will stay for a while, and I guess we can, maybe not in this huge room, but in a smaller circle, keep on discussing this topic, which could probably fill the room until Friday and the end of the IGF.

But, yeah, at this point, thank you for being in the session and discussing with us. Stay here if you like. And if not, have a good evening and get home safely.

(applause)

(Session was concluded at 18:16)

Contact Information

United Nations
Secretariat of the Internet Governance Forum (IGF)

Villa Le Bocage
Palais des Nations,
CH-1211 Geneva 10
Switzerland

igf [at] un [dot] org
+41 (0) 229 173 411