
WS 48: IoT - Ethical Considerations for the Digital Age (Workshop Room 1, 13 November 2015)

The following are the outputs of the real-time captioning taken during the Tenth Annual Meeting of the Internet Governance Forum (IGF) in João Pessoa, Brazil, from 10 to 13 November 2015. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 


     >> WOUT de NATRIS:  Good morning.  And welcome everybody to workshop 48 on the Internet of Things and Ethical Choices for the Digital Age.  We will start straight away.  I'll introduce everybody when they get the floor. 

     The idea really is that we have a roundtable, so we are going to invite everybody to participate where possible.  Ask questions.  I'll start with a short anecdote about why we're doing this.  I'm sure you are here because you read about it on the website.  The reason, basically, is that around mid November last year I read a tweet from Eric, the Internet professor in the Netherlands, who said: "I went to this future conference and I saw The Matrix and Skynet in one."  In other words, two famous movies: The Matrix, and The Terminator with Arnold Schwarzenegger.  I said: when somebody like this, working on the Internet, says something like that, something must be going on.  I met him the day after at a convention in the Netherlands, and I said: Eric, what did you see?  He said something about the new version of the IBM Watson computer, that it was so deep into artificial intelligence that it was scary.  And about nanotechnology that will be injected into human cells, maybe two to three years from now.  I said: let's do a session on it.  Because is it on the Internet?  Yes, this is about the Internet, and about the combination of the Internet, high speed broadband, and computational power.  And it's going to affect us directly.  What is the Internet of Things going to do to humans, privacy, profiling, algorithms, et cetera?  And where is political leadership?  That was Eric's last comment: I don't see this being discussed enough in politics, in policy.  Has that changed in the past year?  Perhaps.  We will find out.  That's why we brought politicians to the table.

     So, in other words, let's start there.  We will have an introduction by Maarten Botterman, who is the Chair of the Dynamic Coalition on the Internet of Things at the IGF.  He has to leave early; there is a main session on the Dynamic Coalitions which he has to join, but we are fortunate to have him here first.  I'll give the floor to you, Maarten. 

     >> MAARTEN BOTTERMAN:  Thank you.  The Dynamic Coalition on Internet of Things was set up, in fact, in 2008 during the IGF in Hyderabad.  So at least here there were some people who knew very early on there was something here that we have to talk about on the global level. 

     Right now, we've evolved towards a good practice draft on what the Internet of Things should look like on the global level.  The intention is not that everyone agrees on it now, but to set out the points that we need to discuss for the coming year and maybe beyond: what we need to agree upon. 

     It's clear that the Internet of Things is happening, and more now than in 2008.  Tens of billions of these things will come online.  They will be connected, they will share and collect data, and they will learn, adapting their behavior accordingly. 

     Over the last years it has been driven mainly as a business opportunity.  And more and more it's clear that it also supports the achievement of societal goals; smart cities are an excellent example.  But there are other things, too: after the tsunami disaster, tsunami warning networks were rolled out, and there are a lot of disaster detection systems and things like that.  It's happening.  We need it.  And to make it happen even more, to benefit even more from what technology has to offer to help make society sustainable, there needs to be space for innovation. 

     Now, this requires new thinking, sometimes, about laws.  Laws are here to protect us.  But the laws that are in place were never developed with the idea that our society would be increasingly digitized.  And this is happening now.  So how do we deal with that, and how do we make sure that we can benefit from everything it has to offer, while at the same time guarding the issues that keep us busy as concerns, such as privacy, and such as the choices that an IoT application, an IoT environment, makes for us? 

     So the focus really is on this idea: can an ethical approach, an ethical stance, an ethical commitment from stakeholders help us to move ahead in this newly opened space for innovation? 

     And today -- yesterday, as well -- we also discussed that the impact on Developing Countries in particular is twofold.  One is that it helps to address some of the objectives spelled out in the Sustainable Development Goals that were agreed by the UN at the end of September.  The other is that you don't only bring IoT to Developing Countries to help make things happen; you also empower Developing Countries to develop their own IoT solutions.  That is quite possible, because the range of combined applications is enormous and a lot of innovation is taking place.  Many of the components are off the shelf and affordable, even, and with them you can do a lot already. 

     So with that, the focus moving forward, for the coming year, is: what is a reasonable ethical standard from a Civil Society point of view, as in, this would be acceptable for us?  And what is, from the Business Sector and the Technical Sector, the view on what is doable?  We want to see that come together over a series of meetings and online conversations. 

     Key elements that we see at the moment are very much meaningful transparency, which means that people don't only need to see what happens, but also need to be able to understand what happens. 

     Real accountability.  So that people know where to go in case things go wrong.  And this is not purely straightforward, because we talk about a chain of organizations who each offer parts of the service that you're benefiting from. 

     And also, a real choice.  So that you're not bound to all products from one provider, and so that you don't only have the choice to be either in or out; there should be opportunities in between, choices in between. 

     So having said that, it became clear that when you talk about meaningful transparency, you shouldn't be counting on your users to always know what to say, what to choose, what it's about.  If you remember when firewalls were coming in for PCs at the end of the '90s, the first applications, if you installed them, would pop up every minute and ask you: this application wants to do something, would you allow that or not?  Well, I am sort of informed about computers.  I'm not a computer scientist, but I had no idea how to answer, and I certainly didn't want to answer every minute.  And nowadays it's normal. 

     So in the Internet of Things, too, similar transparency probably needs to take place, where you back up what you're able to show to citizens with a kind of fairness principle, to make sure that even those who are not that informed -- I see here something about "should algorithms decide your future?" -- at least know that the principle on which they are approached is a fair one. 

     So I want to leave it at that, but also with an invitation to join the work of the Dynamic Coalition.  It's an open coalition.  For those who are interested in the ethical aspects of the Internet of Things on a global level: you can find us on the Internet Governance Forum website, and I really look forward to seeing you there.  And maybe next year in Mexico, you'll be able to give more content to the concept of ethics and how we will make it possible that stakeholders are there for that. 

     So thank you. 

     >> WOUT de NATRIS:  Thank you, Maarten.  You set the scene for what we are going to discuss today.  I'll put on my glasses first so I can read the four main topics.  The first will be education and employment.  The second one is profiling and algorithms.  The third is permissionless innovation.  And the fourth one is political leadership, innovation and ethics.  We'll have an introduction by one of the panelists on each of them.  After that, some will respond, and then it's open to questions. 

     I forgot to introduce myself.  So let's do that before I introduce the first speaker.  My name is Wout de Natris and I run this session as the Moderator on behalf of NL IGF, which is present here in the room, also. 

     First we will have a video, or a realtime link, what is it going to be?  We have a video.  Okay.  So we have to go to the backup plan, but we have a prerecorded introduction by Mr. Coetzee Bester, and he will give us an introduction to education and employability. 

     >> COETZEE BESTER:  Chairperson, I wish to thank you for this opportunity as well as others to participate in this discussion. 

     The theme of this group session is employment and education in the future.  I want to share a number of observations with you.  First of all, I want to refer to the World Economic Forum survey that was recently published on the Internet under the title "What millennials look for in a job."  

     Basically, in summary, they look for career and advancement, as a first priority. 

     Secondly, they look at the company cultures and the ways that companies do their business as part of an ethical platform and the way that that platform enhances new business opportunities. 

     They look for development opportunities for themselves in specifically technology, tourism and Government structures. 

     Interestingly, 91 percent of them will relocate to advance their careers if necessary. 

     A second observation is that of the relationship between employment and the development of the individual.  In many countries where I worked, I picked up that young people look at jobs as short-term, nonsustainable or less secure employment opportunities, while a career and personal development are linked together in terms of a longer term and more sustainable opportunity. 

     My observation in Africa is that the IMF sees a lot of potential for growth.  Conflict is a huge problem.  Corruption is an even bigger problem.  And while we have resources, young open minds, huge opportunities, and growing access to information, we must still put these into practical working environments to create jobs and personal development for young people. 

     I conclude with four guidelines.  The first is: create a practical, responsive interaction between training institutions and the end-users of that knowledge.  The end-users are not, as traditionally conceived, the formal labour market only. 

     Revisit the concept of management of change so that we can prepare ourselves for the new environment. 

     Prepare for capturing and managing new kinds of job training.  Those curricula must be developed, and we must be ready for the new training market. 

     And, lastly, emphasize the benefits of in-job training, where organizational culture creates a focus for the student in the working environment; workplace experience and discipline are some of the benefits. 

     I thank you.  And wish you well with the discussion. 

     >> WOUT de NATRIS:  Thank you, Coetzee.  That sets the scene for developing nations in general, perhaps.  What we learned is that there is still a need to connect in a better way to the Internet that may be different in developed nations, where the Internet of Things may have different implications. 

     Who would like to respond to this presentation first?

     Questions from the audience?  Any comments?

     This presentation speaks to what needs to happen in developing nations.  I don't know if there is anybody who would like to comment, if you recognize what is being said. 

     Deep Silence. 

     Please.  State your name and affiliation, first.

     >> AUDIENCE:  Julia McKean of the Children's eSafety Commission in Australia. 

     So training just the people about to come into the job market is not the only solution.  We should be starting with children in kindergarten, because they are the people who will be speaking the Internet.  It is my strong opinion that reading, writing, and Internet security and knowledge need to go hand in hand.  And we in Australia are working towards that by having this specific children's safety Commission. 

     >> WOUT de NATRIS:  What I can also imagine is, as is generally being said, that machines are going to replace humans in the job market.  Is that a fear that is actually coming true?  You hear about the disappearance of the middle class in the United States.  Is that because of machines stepping into the workplace, or is that because of other factors?  It's something that I think is a point worth discussing. 

     So Barry? 

     >> BARRY LEIBA:  This is Barry Leiba.  When I was a child in the '60s, there was already the joke "I've been replaced by a machine."  This is not a new concern.  But it hasn't happened, actually.  What happens is that jobs shift, and different skill sets are needed at different times.  So I don't think machines are replacing us, or going to replace us, with the Internet of Things.  I think the Internet of Things is going to do some things and have us shift our responsibilities in other directions. 

     >> WOUT de NATRIS:  So please state your name, first.

     >> AUDIENCE:  Andrew Puddephatt from London. 

     I think the problem is not that the next round of technological innovation will destroy all jobs, but that it will destroy the middle-ranking, reasonably paid jobs, and that the future of work for most people will be fulfillment jobs: jobs of a service nature and of a very low paid nature.  So the job market will split into a small number of very well paid people and a much larger market of relatively poorly paid people.  And I think it will particularly affect the traditional work demographic of men between the ages of 18 and 50, who are already experiencing unusually high rates of unemployment, up to 25 percent in many developed markets. 

     So I think we will see shifts.  And there are certain economists who argue that, for the first time in human history, this round of innovation will destroy more jobs than it creates.  That has not been the history.  But there is not the same optimism across the economic world about the future. 

     And at a minimum, I think we are seeing a future of extremely low pay for very, very large numbers of people.  And that's something that I don't think any of the political leaders we have at the moment are really addressing or coming to terms with.  But that prospect is for the next generation.  It's not that far away. 

     >> WOUT de NATRIS:  Thank you very much.

     >> AUDIENCE:  I'm Tony Chigaazira.  I'm the Executive Secretary of CRASA, the association of communications regulators for Southern Africa.  I just want to add that when we talk of education and employability, I think that is where the problem starts: we are still thinking with the old mind, the old model, where we educate in order for people to get employed.  So I think we should come out of that mode. 

     Because when we talk of the Internet of Things, the very next thing we talk about is our jobs.  We should not, at this stage, be talking of employment as the main problem.  As the lady correctly pointed out, we should start right in kindergarten, in those grades, empowering the children, teaching them to be innovative.  But even then, we should start with the teachers themselves.  Are the current teachers qualified to teach in this new dispensation, where the future child will be in the thick of this Internet of Things? 

     So right from the teachers themselves, the lecturers, they should be reoriented, and that mindset removed, that we are raising children to be employed by somebody else.  We should really focus on innovation, so that they are innovative, they make things happen, and they don't waste time worrying about who is going to employ them; they are more concerned about what they can bring about that will really serve their community, that will make everything around them a lot easier.  That's my brief, Chair. 

     >> WOUT de NATRIS:  Thank you, sir.

     >> AUDIENCE:  I'm Yamae, from the Center on Governance for Human Rights in Berlin.  I work mainly in human rights education, and I do a lot of online education -- master's degrees, et cetera -- in the field of human rights. 

     And I would like to counter a bit the pessimistic view here and see the glass as half full.  What we have seen across the whole online education business over the past years is a really growing number of people, particularly in the target group of 18 to 30 year olds, young professionals, who want to gain more capacity, in particular also on the ethical issues. 

     That said, I think one of the core issues is that our traditional education market, if I want to call it that, the way we have set up higher education as well, is still very much bound to two things.  First, it is oriented toward jobs in place: people have to work in the country where they are from.  That is what these trainings are designed for.  And second, it still depends largely on citizenship.  Even if you have a qualification, you cannot just move from one country to another and work there; there are a lot of hurdles.  And therefore this whole online world, where citizenship doesn't matter, where you need a qualification and, if that qualification is needed, you just jump in and do the job, gives a new opportunity. 

     And I think that we not only have to adapt our education system to this situation, but also our job market.  Therefore I would like to disagree with the pessimistic view that this technology will destroy more jobs than it will create.  I think we haven't really bridged the gap yet between, let's say, the 20th and 21st centuries; we are still in the mode of the last two thousand years, that you work where you are born.  And I can also see that those people who go through this new kind of education, being more flexible, being more adaptive, do find very good jobs, particularly in the IT world. 

     >> WOUT de NATRIS:  Thank you. 

     The final comment on this section.  Sorry.  That gentleman.

     >> AUDIENCE:  Thank you very much.  I'm Jimson Olufuye, Chair of the Africa ICT Alliance, an alliance of ICT stakeholders in Africa. 

     This is a very important topic, and I'm glad to be here.  I support the last statements and corroborate the fact that there will be new opportunities, particularly for the developing and least developed world.  Because if you look at the ICT evolution thus far, in Nigeria alone ICT has created millions of jobs and has revolutionized the economy: it moved the economy from 20th in Africa to number one.  So there is huge potential right here.  Knowledge will sharply increase, and human beings will benefit more with respect to that. 

     And, lastly, I would just like to add a caution, because I've been hearing people say that machines will just take over everything.  No matter what, whatever we produce must be accountable to the stakeholders.  We remain the stakeholders.  So they must be subject to us at the end of the day.  They must serve our purpose, and that purpose is why we're gathered here: an inclusive society for everyone around the world. 

     Thank you. 

     >> WOUT de NATRIS:  Thank you, sir. 

     We will move to the next section.  But, concluding, I think we can see that there are two very different opinions in the room on employability and education, which is a good thing.  What I will ask is: what did the people tilling the fields manually think in 1850, when the first tractor-like machine moved in and made their jobs redundant?  They moved to the city, found different jobs, and were industrialized.  And that led to the jobs that we do nowadays.  What will happen next?  Like Barry said, things will shift again.  The question is when and where and how exactly. 

     But one final point that I want to reiterate is about the teachers that have to teach the new generation.  Are they really up to the task that is set out for them?  I think that is a very good comment you made, sir.  It is something we have to take home and think extra about.  Because the world is changing, and is our educational system adapting to that or not?

     We will move to the second topic, which is on profiling and algorithms.  And we have -- I have to read the name properly -- Joanna Bronowicka from the Centre for Internet and Human Rights in Berlin.  I'll give the floor to you.  Thank you. 

     >> JOANNA BRONOWICKA:  Thank you.  I'm from the Centre for Internet and Human Rights.  We are a research institution, and we did a project called Ethics of Algorithms with the Dutch.  We think it's an important topic for research, because it will be a good topic for policymakers and we need good evidence.  So if you want to learn more about the project, we have wonderful stickers that will take you to the Ethics of Algorithms project website.  And we have a publication there called "From Radical Content to Self-Driving Cars."  So we see the ethics of algorithms as a broad subject that includes search engines, the news feed, and algorithms used for hiring or for pricing your health insurance.  Algorithms can be used to predict crime or to scan for terrorist content.  And of course it's related to the Internet of Things, because those things will also be governed by algorithms.  And self-driving cars are one of the most popular topics where ethical dilemmas are presented. 

     But I want to talk about regulation and ethics, because I think it's important to make a point that regulation is not enough.  And we're talking about ethics, because regulation will not be a magical wand that will solve a lot of these dilemmas that we're facing. 

     Profiling is one of the few areas where regulation actually has been quite successfully discussed, partly because in Europe we have this discussion about the General Data Protection Regulation, and profiling is part of it.  It was an opportunity to raise awareness about this issue.  Profiling is a specific type of algorithmic data processing where, based on your preferences, you can be put into a certain category.  And a lot of people were worried that this type of profiling can lead to discrimination. 

     And I think in this specific case of algorithmic regulation, what is very important is giving users the right to know: notifying them that their data is being processed, and notification including the right to correct that personal information. 

     In the general approach accepted by the Council of the European Union in the summer, this kind of perspective is included.  And it seems the regulation will include the notion that the data subject, so the user, should be informed about the existence of profiling and the consequences of such profiling.  And I want to give an example of what the consequences of profiling can be, and why regulation might not be enough and why we really have to look at ethics.  Maybe some of you are familiar with the example of Target, which sent a young lady advertising coupons for baby products.  Her father didn't know that she was pregnant.  But based on what she purchased -- unscented lotion and other things -- the algorithm was able to predict that she was.  It's probably not illegal for marketers to use this data as well, but we can ask: is it ethical?  Because we know, based on other research, that when women are pregnant, they are most prone to change brands.  So it's very smart and tactical of companies to target women at that time.  But of course we should ask whether it's ethical. 
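     The Target anecdote describes a simple predictive profiling pipeline: purchases go in, a category such as "likely pregnant" comes out.  A minimal sketch of how such purchase-based profiling could work is below; the products, weights, and threshold are entirely invented for illustration, whereas a real system would learn its weights from purchase histories.

```python
# Hypothetical sketch of purchase-based profiling, as in the Target anecdote.
# The product list and weights are invented; a real retailer would fit them
# to historical purchase data.

# Invented weights: how strongly each purchase signals the "pregnant" profile.
SIGNAL_WEIGHTS = {
    "unscented lotion": 0.4,
    "prenatal vitamins": 0.9,
    "cotton balls": 0.2,
    "wine": -0.5,
}

def pregnancy_score(purchases):
    """Sum the signal weights of a customer's purchases (0.0 if unknown)."""
    return sum(SIGNAL_WEIGHTS.get(item, 0.0) for item in purchases)

def assign_profile(purchases, threshold=0.5):
    """Put the customer into a category once the score passes a threshold."""
    return "likely-pregnant" if pregnancy_score(purchases) >= threshold else "default"

print(assign_profile(["unscented lotion", "cotton balls", "prenatal vitamins"]))
print(assign_profile(["wine"]))
```

     The ethical point survives even in this toy version: the customer never sees the weights, the threshold, or the category she has been placed in, yet the category drives which coupons reach her household.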

     So I really think that it's a very important area for research.  And it goes beyond any one law, like competition law or data protection law.  It's a challenge that arises from a mixture of automated decision making -- because these algorithms are sometimes designed to correct themselves, and even the engineers who design them sometimes don't know what the outcome will be -- and, of course, value judgments about morality.  So it's a very interesting intersection.  And the basic premise is that companies, programmers, and algorithms can have ethics.  We should continue the debate not only about legislation, as I said, but about ethics as well. 

     Thank you very much. 

     >> WOUT de NATRIS:  Thank you, Joanna.  And of course this is a major topic.  Almost everybody has one of these, and what exactly does it do with your private data, which is on there?  We just really don't know.  And what is actually going to be done with that data?  Just like Joanna said: who is doing what with it -- companies that we have never heard of, and that we don't even know access our phone every day.  So it's a topic that I think merits discussion. 

     I'll check whether Alejandro is online or not.  He isn't?  Then we will just move to the room, and I'll invite you to speak first, Gry. 

     >> GRY HASSELBALCH:  Thank you.  When we talk about algorithmic profiling and predictive computing in general, the next step is machine learning -- talking about cognitive computing, which would normally include these kinds of algorithms.  And what I want to say is that what we need to focus on is that algorithms are the ones that make value out of data.  And the industry is aware of this.  If you look at the biggest tech companies around the world, they are all heavily invested in machine learning and AI.  Facebook and Google are opening new research departments, hiring AI and machine learning staff, and dedicating large portions of their budgets to this area. 

     And so what we have here is an evolving type of economy based on finding patterns in data, creating profiles, predicting and responding to data.  Making meaning out of data and transforming it into value. 

     And, of course, some have called this the algorithmic economy: we go from big data to the algorithmic economy, because it's the algorithms that are the value makers.  They are the recipes of success for business.  And so, of course, they are based on subjective assumptions, interests, maybe biases -- I don't know -- commercial, governmental, scientific, and so on.  What that means to me in this context is that this new, speedily evolving type of economy is developing -- with a whole set of new businesses, services, products, infrastructure -- with no ethical oversight and no public scrutiny.  So there is a lack of transparency in these proprietary value-making algorithms.  They are deployed invisibly.  The people they act on have no access to them.  Their basic functioning and source code are, most of the time, secret -- as secret as the recipe for Coca-Cola.  They are offered to new start ups, which is nice, great, we are developing a market here.  But the way they are offered is symptomatic of what Frank Pasquale called the "Black Box Society."  If you didn't read it, please do. 

     So what I mean here is that they are offering the APIs, the tools to use these algorithms.  But that means that not even the developers of the new services based on these know the recipes of what they are developing.  They are provided with the tools to create meaning out of the data, but they don't possess the actual value, which is the algorithm that creates the meaning. 

     One example is a developer cloud, one service that I found: the Watson ecosystem.  Do you want the example?  I can also stop.  But it makes a good point about the ethical implications of this.  Within the Watson ecosystem there is a career matching service that has been developed in this system.  It's called Uniti S.  And it's all about matching a potential worker to a potential employer, based on creating a cultural fit.  So there is a secret algorithm, which not even the developer knows or has access to, that is deciding -- based on, for example, a person's social media history -- how this person fits into a future potential workplace. 
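     The point about APIs can be made concrete.  A developer building on such a service only ever sees an opaque endpoint: data goes in, a score comes out.  The sketch below is entirely hypothetical -- the function, profile fields, and scoring rule are invented stand-ins, not any real Watson API -- but it shows the asymmetry being described:

```python
# Hypothetical illustration of building on a black-box scoring service.
# proprietary_fit_service stands in for a remote "cultural fit" API:
# the caller sees inputs and outputs, never the model itself.

def proprietary_fit_service(candidate_profile, employer_profile):
    """Stand-in for a remote API call.  The real logic -- the valuable
    'recipe' -- would live server-side, invisible to the caller."""
    # Hidden, arbitrary scoring; the caller cannot inspect or explain this.
    overlap = set(candidate_profile["interests"]) & set(employer_profile["culture"])
    return {"fit_score": round(len(overlap) / max(len(employer_profile["culture"]), 1), 2)}

# The developer's side: all they can do is send data and use the score.
candidate = {"interests": ["cycling", "open source", "travel"]}
employer = {"culture": ["open source", "travel", "hierarchy"]}

result = proprietary_fit_service(candidate, employer)
print(result["fit_score"])  # the developer gets a number, not an explanation
```

     The developer ships a product on top of this score without being able to audit why any given candidate was ranked up or down, which is exactly the "black box" problem in miniature.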

     And so I'm just saying: imagine if these kinds of services are transferred into an IoT environment, where we can be tracked in our whole physical being and in how we behave in the world.  I mean, how can I question the ethics of an Internet of Things or of an algorithm if not even the developer himself knows exactly how the product's algorithm is designed to act on data? 

     So again, how can I put a health warning on a product if I don't know -- even know the ingredients?  So that was my...

     >> WOUT de NATRIS:  The name that you just said, can you mention the name again? 

     >> GRY HASSELBALCH:  Frank Pasquale. 

     >> WOUT de NATRIS:  You wanted to say something about the topic. 

     >> GUILHERME CANELA de SOUZA GODOI:  I would like to start with a story.  Last March I was with a good friend, Susan Linn, and Frank La Rue at the Commission on Human Rights, on children's issues and the ethical dimensions of media and the Internet.  And Susan, a child psychiatrist at the Harvard Medical School, was telling us about a new Mattel toy called Hello Barbie, which is a very freaky thing.  It's a doll that is connected to the Cloud.  The doll records what the child is saying while the child is playing with it and sends the information back to the Cloud.  ToyTalk, which is a Mattel partner, processes the entire thing with their algorithms and sends the conversation back to the doll, and the doll interacts in this way with the child and the family.  And, of course, the implications -- not only the ethical implications but the human rights and privacy implications -- of these kinds of new devices, of the Internet of Things, are amazing and astonishing, particularly in the case of children's rights.  And in this relationship with toys there are even other kinds of implications, from an educational point of view.  As probably most of you know, the relationship a child establishes with his or her own toy is one of the most intimate relationships the child establishes in the first stages of development, and this is of course something that we need to think about. 
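     The round trip described here -- record at the toy, process in the cloud, respond through the toy -- can be sketched as a simple data flow.  Everything below is invented for illustration (function names, stored fields, reply logic); it is not the actual ToyTalk protocol, only the shape of it:

```python
# Illustrative sketch of the cloud round trip described for Hello Barbie.
# All names and storage details are hypothetical, not the real protocol.

cloud_storage = []  # stands in for the vendor's cloud-side conversation store

def cloud_response(utterance):
    """Server-side processing picks a reply; the family never sees this logic."""
    if "play" in utterance.lower():
        return "I love playing with you!"
    return "Tell me more!"

def doll_records(utterance):
    """The doll captures what the child says, uploads it, and speaks the reply."""
    cloud_storage.append(utterance)   # the child's words are retained remotely
    return cloud_response(utterance)

reply = doll_records("Let's play a game")
print(reply)
print(len(cloud_storage))  # the ethical issue: the child's speech now lives in the cloud
```

     Even in this toy version, the privacy concern is visible in the code: the recording step and the retention step happen before any reply is produced, and nothing in the flow informs the child or the family of either.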

     So it is not surprising that when UNESCO started the worldwide consultations for the Connecting the Dots conference, which most of you joined in Paris earlier this year, one of the key issues that drew the most attention from the people who participated was actually the ethical dimensions of the knowledge societies.  And many of the things that were already said here today by previous speakers: we are talking about transparency, about accountability, about self-regulation, and all the challenges this poses for the companies involved, and actually for all of us. 

     Very briefly, what came out of the Connecting the Dots conference on this particular issue of ethics: the people who participated talked about promoting human-rights-based ethical reflection, research, and more public dialog on these issues, and about incorporating the educational component -- many of us discussed this in the previous section -- with teachers discussing those issues.  Which is not easy at all, if we ourselves are here with lots of doubts in these discussions; it's a very complex issue. 

     And how we incorporate the human rights based aspect in these ethical discussions was very much underlined during the Conference in Paris, of course considering the transboundary nature of all of this.  Many people in the Conference also underlined the importance of promoting a global citizenship education, with different components of capacity building and research, and the exchange of good and bad practices in these areas. 

     So it's a very important issue.  And I agree with our initial speaker that we need to discuss, yes, regulation aspects, but also ethical self regulation aspects of this.  And this example of Hello Barbie is very interesting, because when you introduce the child component into this discussion, it's even more complex, because we need to take advantage of a multiplicity of expertise to carry out this ethical discussion.  And of course UNESCO is very much attentive to what is going on:  not only, of course, to the risks, but also to the amazing opportunities that the Internet of Things is offering us. 

     Thank you very much. 

     >> WOUT de NATRIS:  Thank you, Guilherme. 

     Michael is standing there.  And then you and then we will move to the next section.  Michael.

     >> AUDIENCE:  Hi.  Michael Kaiser from the National Cyber Security Alliance in the US.  I think we have to split the name "the Internet of Things" into two pieces.  You have the industrial side:  corporations who are going to be using sensors all over the Internet to tell them about their business, maybe to tell them about their customers.  Maybe there is a middle ground, and that will be the five thousand sensors in the Boeing engine that is flying across the Atlantic, saying it's okay, it's not okay.  And we will be able to work with that. 

     The other part we have to rename and call the Internet of Me.  Because this is about me.  This is data that I'm generating as I move through the world.  So it's not about things; that implies inanimate objects that are talking to the world.  It's me talking to the world.  And it's the things that I connect to the world that tell the world about me.  And I think that changes the dynamic for users, because it engages them more in thinking about:  Is this useful to me?  How is this data going to be collected?  How is it going to be used?  Where is it going to be stored?  Where is this data going to be connected to other data in my life?  And we're not only talking about things.  One of the big issues in the United States is student data:  all the data that is being collected about students in the classroom, and how does that get connected to other things that they are doing?  Or is there discrimination in targeting kids young:  this kid will never be a data scientist, which will be a huge area of jobs in the future, because they didn't do well in second grade.  So we have to be careful about how we do this and how we talk to people about this.  And we have to empower people to make choices.  And we have to teach people how to secure the devices.  There is no way to know about the security of devices when you hook them up. 

     You had a case in the United States where baby monitors were hacked.  And video monitors were watching kids in their houses.  So there is a huge amount of work that needs to be done in this space, both ethically and education wise.  So we have a challenge ahead of us. 

     >> WOUT de NATRIS:  Thank you.

     >> AUDIENCE:  Argyro Karanasiou, Associate Professor in IT law at the Centre for Law, Policy and Management, at a university in the United Kingdom. 

     I just want to bring wearable tech into the picture.  That's a specific area when it comes to the Internet of Things.  And it seems to me that we are discussing ethics and law in separate ways, whereas we should discuss a combination of the two.  We can't really have a concrete answer just by relying on ethics.  And we don't have a concrete legislative framework yet, so we have to push and try to come up with something better. 

     And let me give you some examples.  Example number one:  in 2014, DeepMind, the artificial intelligence startup in London, was bought for a substantial amount of money by Google.  And that was under only one condition.  The CEO of the startup said:  I'm happy to do this, on just one condition.  You have to set up an Ethics Committee. 

     Story number two, when it comes to wearable tech:  like this Fitbit I'm wearing, which measures pretty much everything.  And the data is not here, it's not on my phone, it's in the Cloud, which pretty much means I don't know where my data is.  But I know that I did not get enough sleep last night, for example.  And it's not just me knowing that; so do other people. 

     So actually the Article 29 Working Party in the European Union has said, in an opinion, that consent is not really working.  If you look at this device, there is no screen where I can touch "I consent."  So we have to build a better legislative framework alongside ethical values.  And this requires transparency, but it also requires accountability. 

     So if I'm denied a loan, I have to know on what grounds I was denied this loan.  Or certainly some data has been given to a potential employer, who decides that I will not be good enough for the job because my sleeping pattern shows that I'm not the perfect employee.  I need to know on what grounds the decision has been based.  So it's not just the algorithm itself, it's also how it has been used to form a decision, which is not something easy to figure out. 

     And the last thing I'd like to mention is that my main concern is that I wouldn't want us coming from a purely legislative perspective.  I wouldn't want us to be treated as mere consumers, but I would like us to be seen as human rights subjects. 

     >> WOUT de NATRIS:  Thank you.  Some very sensible questions and comments. 

     What it shows, basically, summing up, is that there is a lot of black box going around.  The title of the book is excellent, because we just do not know enough.  And there is a question:  Do we need regulation, or is self regulation enough?  And where is self regulation enough, and where perhaps should regulation be thought about?  I think those are the topics that we can bring home. 

     We move into the third section, which is the next stepping stone.  Because what makes this all possible is "permissionless innovation," the concept that on the Internet of Things we can use this for anything.  And if you find an application, we can put it on there. 

     So we had Jari Arkko on the panel.  If you wanted to see him, I'm sorry, but he had a conflict in his schedule.  But we have Barry Leiba.  He worked for IBM Research, and he is the Applications and Real-Time Area Director with the IETF.  And you'll explain what the full title is. 

     >> BARRY LEIBA:  Hi.  I hope I don't disappoint you.  So we have had what we now call the Internet of Things.  Pieces of this have been around for a long time.  We have had building automation; home automation to a lesser extent, but that is increasing; vehicular and traffic automation; and lots of other stuff.  Soon we will think of all of that as part of the Internet of Things.  But much of this has been done with proprietary systems.  The Internet of Things is built on the Internet.  The Internet Engineering Task Force, my organization, we have built that.  It's critical that we build our IoT solutions on those standards for interoperability:  giving consumers free choice, letting end-users pick the applications and solutions that work best for them, and freeing them from one vendor's custom designs.  So that's the first ethical point:  users need choice. 

     A standards based system is built on IPv6 for accessibility, using standard Internet Protocols, supported by many companies, many services, and many pieces of the Cloud.  Letting people make choices about small parts of the system fosters the -- I'm hearing feedback. 

     That's been the underpinning of the Internet from its start.  And that's the reason the Internet has been able to change our lives the way it has.  So the second ethical point:  Builders of the systems need to be able to build a small piece of it and plug it in with everybody else. 

     As we make this diverse IoT casserole, we have to put in the spices from the beginning, and adjust the seasoning to make sure that what we get is going to limit harvesting and collection, share data with the minimum parties necessary, and give users the appropriate controls. 

     The third point:  security and privacy have to be in there.  It's important to have a network that allows the applications to work without artificial impediments.  The IETF is working in this area.  We have what we call constrained environments:  those with limited power, limited computing ability, limited communications access and storage.  We have an application protocol, authorization and access controls, a routing protocol, et cetera.  It all provides for the next innovation that will stun us, that no one had to ask permission to create.  Other standards organizations are building on this, too.  The W3C has a Web of Things interest group that is building on the Web services platform.  The HTTP/2 protocol and the Constrained Application Protocol (CoAP) are key substrates to all of this.  But the point is the WWW changed how the Internet is used and changed our lives.  And I see IoT as another technology that will move the Internet in a new direction and fundamentally change our lives again.  We have to be careful that we build it on a standard platform with the right controls, and permissionless innovation.  And that's the thing. 
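     [Editor's note:  As a rough sketch of what "constrained environments" means in practice, the following shows how small a CoAP request is under RFC 7252, the protocol the speaker mentions.  The message ID and resource path are made-up examples, not from the talk.]

```python
# Minimal sketch of encoding a CoAP (RFC 7252) GET request in pure Python.
# The fixed CoAP header is only 4 bytes, which is why the protocol suits
# devices with limited power, memory, and bandwidth.

def encode_coap_get(message_id: int, uri_path: str) -> bytes:
    # Byte 0: version (1) in bits 7-6, type CON (0) in bits 5-4,
    #         token length (0) in bits 3-0 -> 0x40
    header = bytes([0x40,
                    0x01,                      # code 0.01 = GET
                    (message_id >> 8) & 0xFF,  # message ID, high byte
                    message_id & 0xFF])        # message ID, low byte
    # One Uri-Path option: option number 11 as the delta (first option),
    # segment length in the low nibble (assumes a segment shorter than 13)
    path = uri_path.encode("ascii")
    option = bytes([(11 << 4) | len(path)]) + path
    return header + option

msg = encode_coap_get(0x1234, "temp")
print(msg.hex())  # -> 40011234b474656d70  (9 bytes in total)
```

     A full GET for a hypothetical `/temp` resource fits in nine bytes, versus hundreds of bytes of text headers for the equivalent HTTP request; that gap is the engineering case for a dedicated constrained-device protocol.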

     >> WOUT de NATRIS:  Thank you, Barry.  The IETF is the Internet Engineering Task Force.  That's where the Internet's standards are built and maintained, the standards that make it do what it does for all of us. 

     What I'd like to throw into the room is that in the 19th century, in the UK, you had a group called the Luddites, who were smashing machines that were taking away jobs in the fields.  Things started to be done automatically, and people smashed the machines because their jobs got lost.  Nowadays we are afraid of technology.  Some of us are afraid of technology; others are more optimistic; some are more pessimistic.  But the difference is that then only the rich people could afford such a machine and make it work for them and replace the workforce. 

     Nowadays, just look around.  We are all carrying something like this.  We all carry laptops; we carry everything that drives this whole discussion that we are having at this moment.  We are using it.  And we're part of the product.  Basically, if it's free on the Internet, it's you making the money for someone. 

     So the main question I want to put into the room is:  Do we stop permissionless innovation because we have had enough and we are afraid of where it will proceed?  Or should we continue the way it is?  What is the sense in the room about how things are developing?  Is it the right direction, or should we start being very careful?  Who would like to open it up? 

     >> MARIETJE SCHAAKE:  I'm a member of the European Parliament, and I'm going to ask a really stupid question.  Bear with me.  Has innovation ever been permissionful, you know, as opposed to permissionless?  What is it in comparison to?  Hasn't innovation always pushed boundaries without permission?  I'm trying to understand why this is so specific. 

     >> BARRY LEIBA:  I'll give you one example.  In technology today, and I don't mean to pick on any particular company, but if you want to write an application for the iPhone, for instance, you need to get permission from Apple to publish it. 

     That's an example.  And there have been examples like that in the past in various arenas.  The point of standards is that everything works together, without having to coordinate with anyone else. 

     >> WOUT de NATRIS:  Thank you, Barry. 

     There are no stupid questions.  I think that's a fair question.  Because yesterday I was the Rapporteur for the main session on zero rating, and I didn't even know what it was.  And I came back -- so this is where these comments I read about every once in a while come from:  that there are walled gardens created where people get free access, but they can't access the whole Internet.  They can just go to Facebook or Google, and people think of the Internet as being Facebook.  That is the Internet for them, because that's the only thing they can access.  So, like you explained, you'll not have a lot of innovation.  And that's where the people opposing the concept come from:  you have to access the whole Internet to have the full potential to develop yourself and innovate yourself. 

     I saw a finger.

     >> AUDIENCE:  I'm with the Center of Government for Human Rights.  You were asking the question whether we should worry or not.  I would say, in general, not.  There is nothing new on this planet.  We are always worried about how to regulate innovation and how to implement human rights.  But I think there is one point that is different from the 19th century that you were mentioning:  the magnitude of this data.  And the pace, the speed.  And there I do worry whether we are ready to catch up with this speed, this pace, and this magnitude. 

     Because this is extraterritorial.  None of us lived in the 19th century, and we don't know how people felt when all the industrial compounds were built.  I'm sure it was overwhelming too, but it was still tangible.  It was something that you could touch, you could estimate, you could feel; it was within your territory.  But this one is beyond that.  And yes, I do think we have to be quite creative in how we manage this.  I do not have the answer, but people often ask what is the big difference that we're now facing.  And again, I think it's the magnitude and the pace. 

     >> AUDIENCE:  Lusha Jarjujinski. (sp) I'm from Macedonia, from the Foreign Ministry, but I'll be speaking in my personal capacity. 

     This is something that has bugged me ever since I heard about IPv6 ten years ago:  the sheer number of addresses, more than the grains of sand that would fill the volume of the sun.  An enormous amount of IP addresses that can connect everything, and this is what we're discussing here.  And another sentence that keeps pounding in my head, again over the past ten years, is this:  that the biggest shortfall of the human species is that it doesn't understand the exponential function.  And that explodes at one moment.  And when that happens, especially with technology like this, that can connect everything, a lot of unpredictable things are possible.  So when we are speaking of permissionless innovation and opening up information and all these things that we have seen over the past ten years, there is a new element here.  And that is that everything is being connected.  Machines are being connected.  There are a lot of examples of artificial intelligence right now becoming very, very intelligent.  And we have it in different forms.  So all of a sudden we are creating the possibility for those different types of artificial intelligence to start communicating with each other. 
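     [Editor's note:  For scale, the speaker's point about IPv6 and the exponential function can be checked in a few lines; this is a side calculation, not part of the talk.]

```python
# IPv6 addresses are 128 bits wide, so the address space holds 2**128
# addresses -- roughly 3.4 x 10**38, against IPv4's 2**32 (about 4.3 billion).
ipv6_addresses = 2 ** 128
print(ipv6_addresses)            # 340282366920938463463374607431768211456
print(f"{ipv6_addresses:.2e}")   # 3.40e+38

# The exponential-function point: 128 doublings take you from 1 to that
# number; each individual doubling looks modest until the total explodes.
```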

     I saw an example two years ago of programs learning to communicate with each other in a new language:  robots created their own language, and they turned to each other and started communicating with each other, to the exclusion of the humans that programmed them.  Perhaps I'm afraid of this as well.  But I'm more afraid of that kind of technology being in the hands of a few and being used by humans against other humans:  a very exclusive number of walled communities using it against others.  So I just want to hear the engineers talk about the silos, the possibilities of containing something like this.  Permissionless innovation is great.  But right now we're not just talking about ones and zeros.  We are talking about everything around us being connected, and possibly being in the hands of somebody who can control it. 


     >> WOUT de NATRIS:  Thank you.  Before I turn to the next question that will be the last one, seeing the time, is there anybody from a major corporation that makes money with this sort of business, and is willing to speak up?  Because as you can see, we're talking about them.  We could not get them in the panel. 

     Is there anyone who would like to speak up on this part?  I'm just looking around. 

     And I see no fingers.  So then the floor is yours, sir.

     >> AUDIENCE:  My name is Hirofumi Hotta, from JPRS, in Japan. 

     Permissionless innovation will come through utilizing the basic functions of the Internet, one of which is openness.  Most of the service or application examples I heard here don't have to be on the Internet, I think.  Of course the Internet can be used for the purpose of easy implementation of the service, with global reach.  But IoT can have the function of joining information from multiple information sources on the Internet.  The issue of ethics in the sense of joining information from multiple sources may be different from that of a proprietary individual service, and should be a more difficult issue. 

     Thank you. 

     >> WOUT de NATRIS:  Thank you, sir.  Sorry.  I'm going to close the section.  I'll ask Barry, because you were addressed again as an engineer by the previous gentleman, would you like some final famous words on that topic? 

     >> BARRY LEIBA:  I have no comment on the penultimate statement.  I've read Asimov.  I'm not Asimov, and I don't have a comment on how this stuff can get away from us.  On the last comment, I agree with what you said.  A lot of the applications don't have to be on the Internet.  The point is being able to connect different pieces of data with other pieces of data and create better services, but there will always be some things that don't need that and can just stay in their closed environments. 

     >> WOUT de NATRIS:  Thank you.  And I'll retract my previous words.  I'll give you the word anyway, sir.

     >> AUDIENCE:  Hello, Marcel from Google.  I think all of the concerns are valid.  Once these things scale, we have to have gotten it right from the start, because then it gets really hard to correct after the fact.  But if we focus too much on the worst case scenarios, we have to remember that they are probably not the rule.  So we have to make room for the best case scenarios to actually flourish.  That's it. 

     >> WOUT de NATRIS:  Thank you, sir, for that comment. 

     I think again we heard very opposite positions, but we know things are moving forward and we know the technology behind it is making it possible.  But we have challenges that we have to face, and that's the nice next bridge in this conversation.  Because, as Eric said a year ago:  Where is the political debate?  I don't see it.  The question is, is this debate going on now, a year later? 

     And I'll ask the two ladies sitting next to me to introduce the topic.  I introduce a member of the European Parliament for the Democrats 66 party, and I'll ask you to start.  Thank you. 

     >> MARIETJE SCHAAKE:  Thank you for getting up after what was a fun night, and I thank the IGF for their leadership in this space.  I tend to think globally, but it's always nice to see the country I know best taking a role in addressing Internet governance and bringing people together. 

     I'll try to be brief.  In general, even though there are two politicians sitting at the table here talking about the Internet of Things and Internet governance, it is still an exception.  And it is high time that the discussion on, well, basically the digital ecosystem, including the Internet of Things, is elevated on the political agenda.  But that does not mean that we need new rules on everything.  Having an informed discussion in all of our societies doesn't necessarily have to lead to legislation.  I think what we will see is, on the one hand, multistakeholder norm development, and on the other hand we now see, in a de facto manner, the market driving standards.  And I think Governments should be much more proactive and ambitious in safeguarding the public interest and the fundamental rights of people, as was mentioned in one of the previous interventions as well. 

     One of you also asked whether Governments can still keep up with the pace of technological development.  And the answer is no.  It's a real challenge.  Technology develops at a super rapid pace, the systems getting smaller, faster, cheaper every day, and it's almost impossible for Democratic decision making to catch up.  And other types of decision making I don't favor. 

     But, still, the success, or the benefits that can be harnessed in the public interest from technological developments, including the Internet of Things, depends very much on the trust that there is in these mechanisms, and the trust that can be put in the technology.  So just a few aspects.  There are some areas where politics has a very clear and defined role when it comes to the Internet of Things and its roll out.  For example, the availability of spectrum.  It's something that has traditionally been in the hands of Government, and is absolutely necessary for the well functioning of these networks.  The same with withdrawing specific spectrum:  if, for example, 2G were taken out of the market, that would also have potential consequences.  So I think that is a very specific aspect.  And net neutrality laws are another very specific aspect that is going to impact the Internet of Things. 

     But much more, I believe we should seek an integrated approach, where aspects of data protection have to do with the Internet of Things.  When is data personal?  When is it not personal?  Or when do we see a slippery slope, and how do we treat this?  Security, of course, and certain elements are important, as well as the need to make technology neutral policies. 

     Now, I think Governments can commission research, so that there is more evidence-based discussion here, and so that it's not always driven by actors that have very specific interests.  And I agree that the anticipation of shifts in the labor market, the anticipation of shifts in education, and the question of where value will be added are ones that Government should engage in, and technological evolution is one key element to focus on. 

     Standards are often set by legislators, so interoperability, but also the labeling of products, the expected lifecycle of connected products, for example, will be interesting, I think, for seeing how the ecosystem of an Internet of Things would develop.  And then questions about disclosure of vulnerabilities, transparency, breaches of data, and how those would be reported are of course a hot topic more broadly, but also relevant here.  These are all questions that are not just political questions, but certainly also touch on the engineering aspects of this whole debate. 

     There are very specific legal questions, and I know Arda is going to say more about the ethics, but I think we have to also distinguish between specific categories of risk.  Clearly there is a difference between your smart T-shirt being breached and a smart train or an electric grid being breached.  And the question is to what extent a breach in one product can affect the whole network.  We have to compartmentalize this and set barriers to infection.  But in order to be specific in the policies and measures that we require, we should have levels of identified risk and vulnerability. 

     I think I'll end there.  I wanted to bring one question to the discussion here.  I also serve as a commissioner on the Global Commission on Internet Governance.  And when we had a discussion about the Internet of Things, one of the questions was:  Is the Internet of Things an Internet?  And this may sound really philosophical, but I'd be interested in your view of whether it is the Internet as we know it, or whether it will become sort of a different network.  Does it depend on whether people are connected to the machines, or whether it's just machine-to-machine, and how do you see this?  I'd be interested in that discussion. 

     Thank you very much. 

     >> WOUT de NATRIS:  I think we heard what politicians in general are struggling with, but also what they need to step up to. 

     We have a second politician.  She is a Senator in the Dutch Parliament, and she runs the hotline against child pornography in the Netherlands. 

     >> ARDA GERKENS:  There is a lot of debate about what the Internet of Things is.  Some people say the Internet of Things has always been there, but now, because it's broadening, we are giving it a name.  And it's more than just IT in our lives.  We have the Barbie mentioned, with the cameras in toys for children.  We are talking about Barbie, but Nintendo and others are already in our children's lives. 

     Let's just assume that anything is and will be possible.  And I think we are coming to the point where we think:  Do we need to draw a line?  We have knowledge-based technology.  People who are into knowledge always want to gain more knowledge, just like law enforcement always wants to gain more data to research.  And where is the balance in that?  I heard the person from Google say, as Vint Cerf said, once it's scaled, it's too late and you can't rescale it.  It has to be right from the start.  I think we are beyond that point.  We don't really care about the privacy; we are almost addicted to all the applications we have.  We see opportunities.  We see possibilities.  And this is how Facebook grew.  First we get on Facebook, and then we think, if we think, oh well, there are also downsides to it. 

     So in my opinion, this discussion is not about how data is used or stored, but about exactly the point before we start to use that data.  Do we want this application in our lives, and what would be the consequences of having these applications in our lives?  What are the consequences of AI?  What would be the consequences of not having teachers, but just iPads teaching our children? 

     Like the Barbie:  having cameras in toys, having the kids monitored.  We know in some locations there will be a camera, and in the Netherlands we have cameras on the streets.  But in my home there is no camera.  The camera is dead at my place. 

     So these children are growing up with the idea that they will be monitored all the time.  So what does that do to a human being?  You can't be private anymore.  Maybe in the bathroom you still can.  But nowhere else. 

     And companies are just building apps and possibilities and things which have a great impact on our lives, without thinking about it.  It's why I joined the Dynamic Coalition two IGFs ago in Bali.  The company that ran the smart meter in the UK said, you know, people are not that interested in the privacy of whether they are at home at 3 o'clock at night or not.  And I was like:  Okay, who are you to decide for me?  Your company deciding that I don't care whether you know that I'm there at 3 o'clock at night or not.  I don't think laws will solve this problem.  It's about ethics, and there need to be moral standards.  In ethics, you don't need to argue why you want it or don't.  So you don't need rules like the lady said, why is this data in the Cloud or not?

     One minute.  Sorry.  If you say why, then the objections need to be solved.  We can just argue it.  We don't want it because it's more and more not accessible. 

     Now to conclude.  You said politicians are not interested.  When the Snowden affair came to the Netherlands, we had a debate about all of the data being connected.  And I'm happy to say that I had a motion, which was unanimously supported by the whole Senate, in which we asked the Government to have the institute that advises on society and technological development look at whether or not we need an Ethical Commission on the digitalization of our society.  Just like we have a Medical Commission saying yes, we can clone humans; no, we don't want it to happen. 

     So I'm looking forward to the answer.  But I'm pretty sure that the Institute will agree on my point.  Yes, we do need this Commission. 

     >> WOUT de NATRIS:  Thank you, Arda.  I'll ask to get the video of Coetzee ready, and then Y.J. Park will make her intervention.  She is -- I have to put on my glasses -- a professor at the State University of New York in South Korea.  So, Y.J., please. 

     >> Y.J. PARK:  I'm not a politician.  But let me try to respond to your perspectives as politicians.  First, you were asking whether IoT is the Internet.  And my understanding is that it's all about communication.  I think one gentleman was also challenging:  Is it really technology?  I think everything is about communication.  Communication between human beings and machines, but also communication between human beings and human beings through these machines. 

     So in some sense, communication is really important.  But the thing is, how can we effectively communicate with a machine?  Because even when we communicate with each other as human beings, it is not always easy to communicate without misunderstanding.  Can we operate without miscommunication with the machines?  At this stage, some gentleman mentioned that we definitely have some control over these machines.  But we don't know how long we are going to have some control over them.  So we need some shared understanding about this whole framework. 

     And moving on to the ethics and innovation issues:  for me, it's more like culture.  But this kind of culture framework is not quite translated into our ethical discussions yet.  When you talk about the Internet, it's all about One Global Internet.  That means, as Barry said, it's a standard platform.  That means that we assume a culture might be standardized.  And we have a challenge of standardization in privacy. 

     So going to this innovation dialog:  back in Korea, one of the challenges we have is that a lot of the small and medium companies who try to develop IoT applications have a challenge developing privacy friendly applications.  Unlike Google and the other big companies, who are able to figure out regulatory barriers and translate them into acceptable applications on the platform, the small and medium companies don't have the resources to figure out how to adapt to this privacy framework.  And so they lose a lot of opportunities.  And that might apply to a lot of the companies in the Developing Countries as well.  So those are the kinds of challenges we have:  how can we come up with innovation that can get along with ethics, which in some sense is like cultural diversity? 

     So we kept emphasizing one global Internet, but does that mean we are going to have one ethical framework which can be applied across the whole Internet of Things?  As I said earlier, to me the Internet of Things is all about communication. 

     >> WOUT de NATRIS:  Thank you.  We will now have the video that Coetzee Bester made on this topic. 

     >> COETZEE BESTER:  Chairperson, thank you very much for this opportunity.  I would like to thank you, UNESCO, and the IGF for creating this opportunity to have the remote contribution to this discussion. 

     The theme of this session is political leadership, innovation, and ethics.  In the three minutes available, I wish to contribute on my own involvement in Africa as well as some observations that I made in sub-Saharan Africa. 

     I served in the South African Parliament in the first post-apartheid Parliament, 1994 to 1999.  And I had the privilege to serve in the Constitution-writing body with former President F.W. De Klerk and our subsequent President, Nelson Mandela. 

     Subsequent to the parliamentary experience, I worked in 25 African countries with leadership development programs and the creation of multi-party Democracy structures.  During this time, I made some observations which I would like to offer as an input into this discussion.  First, on various matters, the correct political role is spoken of but not practiced. 

     Secondly, conflict and corruption are detrimental to sustainable development and growth in many African countries. 

     Thirdly, the status of political leadership and that of political representatives on all levels, local level, provincial or regional level, as well as national level, needs to be renovated and reconstructed.  If we don't have better quality politicians, we will not have a sustainable ethical platform for development in those countries. 

     My next remark is that Africa has a positive environment based on its potential.  The potential that follows from a constructive approach to the management of power on the continent, through the African Union, the NEPAD structures, the African Peer Review Mechanism, and the few others that are available, whose recommendations can maybe be included.  Africa has the potential for energy through, inter alia, the Inga project in the DRC.  We have West Coast African oil, which is vast and crosses a lot of international borders.  We have ample opportunity for food production in Central Africa.  And we have opportunities for holidays on the East Coast of Africa. 

     The International participation and the growing involvement of African leaders in the International arena must also be observed.  The way in which Africa pushed for participation in the United Nations, in the IMF, the World Bank, as well as intercountry agreements like BRICS, as well as the network of undersea cables, all create a better environment for Africa's political involvement Internationally, with direct results on political leadership, innovation, and ethics. 

     I thank you. 

     >> WOUT de NATRIS:  Thank you, Coetzee.  Again, it shows this topic from a different perspective on the African continent.  So again it's good to have this sort of contribution into this discussion. 

     We cannot take any questions anymore, because we're down to the famous last words, the screen in front of me is telling me.  I'll ask all panelists to make one short statement on what we take home and what potential topics we need to face in the near future.  And perhaps as a topic for a workshop or a main session at the next IGF. 

     So let me start with you, please. 

     >> GUILHERME CANELA de SOUZA GODOI:  I see there is hard work ahead of us, but perhaps a topic, at least in Latin America, is that we are making an interesting connection with the judiciary power.  Justices and prosecutors are more and more interested in knowing about this, because they are receiving more and more cases on all of these issues.  So we should discuss more with them and include these kinds of players in our discussions. 

     Thank you very much. 

     >> JOANNA BRONOWICKA:  I really like the comment that ethics is culture.  Because we assume we share value judgments just as we share a common Internet space, and we think we live in one Internet culture.  I think the idea of ethics as a decision speaks to me.  A lot of these decisions will have to be made on a national level and within specific cultures. 

     >> MARIETJE SCHAAKE:  I take away from IGF and many other meetings that there should be more discussions between tech savvy people, engineers, and politicians, and also legal experts. 

     But the other thing from this meeting that I specifically take away is that sometimes a topic is so overwhelming and large that we have to make it small and tangible.  And I think people really understand when it's about their personal lives and personal body.  So maybe we should focus the next time on, for example, medical aspects of the Internet of Things or Internet of Things in the medical field.  And then look at it more specifically to have kind of a case study and see what we can learn from that. 

     >> ARDA GERKENS:  I think we should all go home and start debating in our countries, and learn that this is not just about a technology- and marketing-driven society.  It's affecting our lives.  And we need to think about whether we want this or not. 

     >> Y.J. PARK:  I also would like to highlight IoT is more about communication.  And so when it comes to communication, can't we sort of like trust whether machines can make ethical decisions on behalf of us down the road?  That will be the challenge. 

     >> BARRY LEIBA:  The takeaway is standards and interoperability on the open Internet.  Security and privacy built in from the start, and I'll call it open innovation.  Whatever IoT scenarios we see now don't come close to the world-changing innovations that someone will come up with that we haven't imagined yet.  I'm excited about that. 

     >> GRY HASSELBALCH:  I'm from DataEthics, I was not introduced before, and behind the global privacy and innovation work here at IGF.  And of course, in that line, I want to say that I think we need to start thinking of privacy and data ethics not as obstacles but as innovation.  And they need to be built into the design processes.  

     And then coming from what I was saying before, we need to have a public scrutiny of the algorithmic economy that is developing very fast without us following it. 

     >> WOUT de NATRIS:  Thank you.  And I think that ends our session.  That leads me to thank first all of the panelists for your exceptional views on what is actually going on, and that indeed it's overwhelming.  I think that is a very correct observation. 

     I would like to thank you for the interactive way in which you participated in this debate, as we heard a lot of different questions and a lot of different views from you in the room. 

     I want to thank Coetzee especially for taking all of the trouble of getting the video to us, and the IGF for having this panel organized and hosted here. 

     Lastly, thanks to the people transcribing and making all of the technology and the remote participation possible.  So applause for yourselves and the panelists and the technicians.  Thank you.


     (end of session 10:33)
