IGF 2017 - Day 2 - Room XXI - OF49 Big-Data, Business & Respect for Human Rights

 

The following are the outputs of the real-time captioning taken during the Twelfth Annual Meeting of the Internet Governance Forum (IGF) in Geneva, Switzerland, from 17 to 21 December 2017. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

 

>> MODERATOR: Good afternoon, ladies and gentlemen. If you are here for the open forum on Big-Data and human rights, you are very welcome to take a seat. If you are not, I would kindly ask you to leave the room so we can start the session. We have a tight session, 60 minutes, or a little more than 60 minutes.

Please take your seats. If you can come forward, come forward so we can see you more clearly. Those of you who are not staying, please leave the room, thank you. We'll start in a minute or two.

>> Welcome to open forum 49 on Big-Data, business and respect for human rights.  This is a joint open forum organized by the Council of Europe, the European Broadcasting Union, and the Federal Department of Foreign Affairs of Switzerland.

We have a number of speakers whom I will introduce to you shortly. We have only 60 minutes together, 60 minutes or a little bit more; we can stay on a little bit further. And we're going to have some good discussions about the issues of Big-Data: what it means, the collection, processing and extraction of data, and what that means for knowledge, for personal data, for privacy, these issues.

I want to do something a little bit unorthodox at the beginning of this session. I am not too sure it will work, but I'd like to spend the next five to eight minutes asking you to break into groups of, say, five or so.

I have one question to ask you, if you can try to form groups of five or so. One question before we start the session proper, which is: Big-Data.  Are you worried about Big-Data?  Are you excited about Big-Data?  Or don't you know about Big-Data?  And why? 

Before we go to the panel with expertise in Big-Data, I want you to break out straightaway for a few minutes, maybe you can help bring people together, and answer one question. Big-Data: excited?  Worried?  Or don't know? 

You have basically five or six minutes, then we come back into plenary and we ask you for your results. Okay?  Thank you.

[Group discussion]

>> MODERATOR: You have one minute to conclude your discussions and to be ready to speak. If you could identify someone to speak on behalf of your group, thank you. One minute.

Okay, time is up, so you can now come back into plenary. If I can ask you to conclude your group discussions and be ready to tell us the findings of your discussion about Big-Data. Worried?  Excited?  Or don't know?  And perhaps one or two words why.

And I am going to go from left to right if I may. There's a group over here who's still talking. Perhaps you can join us. Could I ask you to give feedback from your group very quickly on the question. Big-Data: excited?  Worried?  Don't know?  Why?  Please.

>> AUDIENCE: We had two opinions in our group, excited and worried. So basically, some of us were more excited than worried, and others more worried than excited. And the reasons are varied.

>> MODERATOR: Can I ask why?  Do you have any information why? 

>> AUDIENCE: The excitement is that you can really do something in research and all this. Research especially. But the worries were about power: who controls it?  Is it handled by computers?  The fear of personalized profiling and the danger for democracy.

>> MODERATOR: Thank you very much. Is there a group at the bottom there?  Did you deliberate?  Thank you.

>> AUDIENCE: I think I wouldn't say the bottom, but okay.

So in our group we realized that yes, there is a huge potential for Big-Data and how it can be used.

At the same time, we worried that those data can be misused, with conclusions used against people.

But one more point that we made during our discussion is that in some areas, some parts of the world, when they think about building infrastructure, when they think about better Internet connectivity, they do not care about data yet. They care about something much more basic than that. First you need connectivity to even start thinking about collecting and using data. Thank you.

>> MODERATOR: First things first, thank you very much.

Is there a group there in the middle row?  Did you come together?  Did anybody here?  No one at all?  Did anybody deliberate here as a group in the middle?  Don't be shy. Please, thank you.

>> AUDIENCE: Okay, I think our views were very similar to the first group over there.  We were mostly excited with a touch of nervousness, for very similar reasons. The nervousness being around, you know, issues like who's controlling the data. You know, concerns about data breaches, which are prevalent. Will personal information be leaked out?  Will personalized profiling occur and be used for things like, say, raising insurance premiums or the like?  Those are the areas of concern, but we're excited about the possibilities.

>> MODERATOR: Thank you very much. Anybody here, did you come together?  Yes?  Briefly, thank you.

>> AUDIENCE: Yeah, we had very similar kinds of -- very mixed feelings about a lot of stuff. Also mentioning that anonymity of personal data is becoming something that might not be possible, because you're more identifiable through Big-Data, in health for example.

And other things that we already do require that we have these data, so there's a lot of potential, that's good. And a lot of worries.

>> MODERATOR: Thank you very much, thank you. I'll come around here to this group. Anybody here wants to speak? 

>> AUDIENCE: Cautiously optimistic was the result. And I would like the team members from the positive side to say something about the potential, from their perspective.

>> AUDIENCE: I think we saw a lot of potential when it comes to Big-Data, because of the way one is able to leverage technology. And coming from a country like Kenya, there are lots of advantages for us to benefit from it.

And I think that given the move towards data protection, and data protection legislation, the worry can be mitigated to some extent, and so my excitement remains up.

>> MODERATOR: Thank you very much. Anybody else? 

>> AUDIENCE: Yes, I represented a more pessimistic voice in this group. We leave behind such an overwhelming amount of digital traces, especially with the invention of the Internet of Things, and the proper handling of the data is not guaranteed, which is why we ended up with the cautiously --

>> MODERATOR: Anybody else?  Thank you. Any other groups which formed?  Anybody want to -- please.

>> AUDIENCE: Oh, yeah, we also had both camps onboard. Basically on one hand we see there's potential for Big-Data, especially for businesses to develop new personalized services with the data, with Big-Data.

And on the other hand, as consumers or citizens, we are worried about the extent and volume of data collection leaving traces.  The key word here was "transparency," and also remedies for users to be able to access their data, to object, and to say no to data processing.

When it comes to businesses, I think we also briefly discussed the question of data retention. And the question of who should pay for the server capacity, if I understood correctly, and the question of who should be able to access that data. Thanks.

>> MODERATOR: Thank you very much. I think we're on now to the final group, if I may, please.

>> AUDIENCE: So, similar feelings. On the one hand we're excited, on the other hand we can be a little cautious.

I think the start of the discussion was that, you know, you cannot judge Big-Data technology, I mean Big-Data as such. The quantity of data is not necessarily relevant; it is what you do with the insight that you get. That can be exciting, and it can also, yeah, be questionable from an ethical point of view or maybe a legal point of view, depending on the practices out there.

>> MODERATOR: Really thank you very, very much for the reflections. That helps us break the ice and get going.

I really want now to invite the speakers to come in, and maybe to reflect on what you said when they intervene, with short statements to react.

First I want to say why we are doing this event, and I want to pass straight away to Giacomo Mazzone from the EBU and Remy Friedmann from the Federal Department of Foreign Affairs of Switzerland.

>> GIACOMO MAZZONE: This is the open forum that we share with the Council of Europe and the Department of Foreign Affairs, and we don't think the answer can be delivered by only one part of the picture.  We need to work together to find the solution.

What's the problem?  The problem is, as you know, that the consumption of all audiovisual material, of any media material, is shifting from the linear world (television, radio, the newspapers) into the online world, the on-demand world. So this is a completely different situation from what we have experienced as broadcasters, media organizations: we now have data, sometimes very sensitive data, about our viewers, our readers, our listeners. 

For example, when there is a right-wing political party on television, the data can say whether this person shares these ideas. So this needs to be protected. This for us is a big concern, especially as public service media; we have to be attentive to this and how to deal with it.

And the main problem for us, and then I will pass to the others, is that when we discuss this, we are working on the idea of a charter (?). All the members agree on the principle and then come back to us and say: I can guarantee I will not share the data I own.

But when somebody comes to a program through a Facebook page, the data and the relevant information pass through a gatekeeper or through a third party. I can make that commitment for myself, but what happens with the third party?  Will he stay silent, will he keep the data private?  Or even use it in his own way?

This, for us, is a concern, and we are looking to other parts of the industry to see how they relate to this and how they try to solve the issue on their side.

>> MODERATOR: Thank you, Giacomo. Remy, do you want to share your views?

>> REMY FRIEDMANN:  I am here because I am worried, but I am also here because we have a national action plan on business and human rights, where we address how to implement the Guiding Principles on Business and Human Rights. What is the duty of the state to protect human rights?  What is the responsibility of business to respect human rights?  And do affected communities have access to an effective remedy? 

The action plan was adopted in 2016, a year ago, on the eve of Human Rights Day, whose anniversary we celebrated this year. And since there are some indications that there are worries or concerns related to Big-Data, we thought that we should be here. Our national action plan doesn't specifically contemplate Big-Data, but it has what we call a smart mix of legally binding and voluntary measures addressing the needs of different economic sectors: looking at the private security sector through a multi-stakeholder initiative, at how security affects human rights, and at the commodity trading sector. And here we are looking at something emerging, an emerging issue that can have a dual use: it can be used for good purposes, and it can be misused. Something that goes across all sectors, not only the I.T. sector; we are all concerned, as individuals, consumers, businesses and governments across the economic sectors.

Is Big-Data actually a commodity on which we have to define what could be a responsible use?  Do we need to develop specific guidelines? 

We are currently in the process of implementing our action plan, and at the end of this year we will review it. So in view of the review by the end of the year: can we identify the possible human rights impacts of the data?  And do we need to do something here?  Is there a potential for acting together with all stakeholders, with all partners: governments, the private sector, civil society and others, on Big-Data, algorithms, artificial intelligence?

>> MODERATOR: Thank you, that was a good introduction. I think it is now clear why we are here. It is good we talk about governments, since governments have the duty to protect human rights under the Convention framework, and I am pleased to welcome Corina Calugaru, thematic coordinator on information policy and Ambassador to the Council of Europe. Thank you, Corina.

>> CORINA CALUGARU:  Usually when talking about Big-Data, there are many stakeholders: governments, businesses and society.

So in general, when we are talking about that, it's important to understand that the Council of Europe is not new in this regard; it has a few instruments that are already in place. One of them is the strategy on Internet governance, and one of the main objectives of the strategy is to set up a platform of cooperation between the Council of Europe and Internet companies. The process was finalized by signing the exchange of letters with the (?) and Internet companies on the 8th of November. Now we have eight companies and six associations that try to work together in order to ensure a more protective way of working on Big-Data.

At the same time, for us it was really important to see the willingness of the Internet companies to work with the Council of Europe on issues that concern all of us. And it's quite important to have an exchange of understanding and an exchange of information between the governments and the Internet companies, because usually the Council of Europe and the intergovernmental agencies are discussing, drafting and adopting international instruments not just from the perspective of the governments, but at the same time from the perspective of the other stakeholders that should implement all the instruments.

At the same time we have another instrument that is already implemented and is open to other countries, not just the members of the Council of Europe. I am talking about Convention 108, which is dedicated to data protection. The Convention is currently being modernized and is in effect exceeding its regional scope, with the potential to become a global treaty for data protection.

In general, when we are talking about Big-Data we have to take into account that we have many actors, but at the same time this knowledge shows the way to think about the new mechanisms, the new procedures, and even more the rights that have to be established, because we have many stakeholders in this process.

And at the same time we have new knowledge and new terminology, such as privacy by design and privacy by default, which for the governments and especially civil society is difficult to digest. When we have new instruments it is really important to have understanding between all, to be effective and to protect human rights, the rule of law and democracy.

And in this regard, we have as well the new guidelines on the protection of individuals with regard to the processing of personal data in the world of Big-Data, which were adopted in January of this year. And I think on this issue we can talk more with Alessandra Pierucci.

>> MODERATOR: Thank you very much. It is quite clear this is historic in the Council of Europe: we have companies and governments coming to the table, in the discussions this year and in other years.

And now Alessandra Pierucci, the Chair of the Committee of Convention 108 on data protection.

We are talking a lot about Big-Data. Can you explain in your introduction? 

>> ALESSANDRA PIERUCCI:  I will try to do that, thanks very much. Yes, I am here as the Chair of the Committee of Convention 108, the Council of Europe instrument for data protection, and it has two unique characteristics. First of all, it is a binding instrument at the international level. And it has a universal character: as was mentioned before, it can currently be ratified not only by members of the Council of Europe but also by other parties.

And as said, it is undergoing a modernization process.  I just want to emphasize two or three aspects of that modernization process which are very relevant for Big-Data.

First of all, transparency. It has been recalled before as an important need for the protection of fundamental rights, and the new modernized Convention sets out the duty of controllers to ensure transparency. That means data subjects must be in a condition to understand what's going on with their personal data and the consequences of the processing.

The second element was mentioned before by the audience: the new Convention introduces new rights, like the right to object to the processing and the right not to be subject to automated decisions. Again, this somehow answers the concerns related to the possible opacity of the data processing.

Plus data protection impact assessment and privacy by design have been introduced as new elements of the Convention.

As mentioned before, the Consultative Committee adopted in 2017 guidelines on Big-Data. Let me just spend one word on the kind of approach that the Committee took to Big-Data.

I was trying to figure out, after Lee asked the question, are you worried or excited about it, what the answer of the Committee would be. I would say it would reflect more or less the feelings from before. On one hand, the awareness that Big-Data can create a lot of innovation, social and otherwise. But at the same time, the awareness of the possible risks which may be caused by Big-Data in terms of marginalization of the individual in the decision-making process, or even discrimination, as was mentioned before.

In respect of your question, Lee, whether Big-Data should be considered personal data: well again, that was something that we discussed a lot, and the answer is actually given by the guidelines themselves when they say that Big-Data are not always personal data, but on many occasions, in a large spectrum of cases, they are. And it is thus very, very important to ensure that all the safeguards for the protection of fundamental rights are adopted.

And let's say that the guidance that was adopted by the Committee is a general one, which means that it will possibly be complemented by other instruments in particular sectors.

And the idea of these guidelines is to promote, and I am going back again to what was mentioned before, let's say an ethical and socially-aware use of data, by asking those who are elaborating such kinds of processes to evaluate in advance the impact, also from the social point of view, of Big-Data on individuals, and I would also say on society.  This kind of preventive approach is emphasized also by the fact that the guidelines, as does the Convention, require controllers to carry out an impact assessment before the commencement of any processing, to evaluate the risks of Big-Data and eventually adopt the appropriate safeguards to avoid discrimination and negative impacts on fundamental rights.

Just to conclude, the guidelines also emphasize the role of human intervention in this kind of data processing. The idea is that the data subject should be able at least to challenge a decision which may otherwise be completely obscure and opaque. And I think that I have exhausted my allotted time, thank you.

>> MODERATOR: Thank you very much. And now we will move swiftly to the business sector. We have Philippe Cudre-Mauroux from eXascale. From the private sector point of view, what is your take on this?  I know you have some examples to share with us.

>> PHILIPPE CUDRE-MAUROUX:  Sure. I have a couple of slides; I don't know if I can get them up. My goal in less than five minutes is to give you an overview of what we do in Big-Data these days. I am a technical person and I have been working on it for about 10 years. Let me give you the big picture in terms of the pipeline and techniques.

Next slide. So that's the idea, right.  The idea is very simple from an industry perspective.

You have data, or we have data, as companies, right. Any reasonable company has a website, customers and data.

If you marry this data with the right technology, what you want to get from Big-Data is what we call "actionable insight." You don't want an average, a variance, you don't want a mathematical formula; you want insight on how you can act. You want to basically make more profit.

Next slide.  What can you do with Big-Data?  All sorts of things listed here: reporting, monitoring, and what people focus on these days are the last two, classification and prediction. You classify items into various classes; talking about customers, you want to classify customers into classes.

And you want to predict. It is impossible to predict the future, but you can build a model based on data and then basically have a likelihood, a probability, of what could happen.

Next slide.  That's the big wave right now, to basically learn those models based on past data.  That's what we call machine learning, what we call artificial intelligence these days. 

The idea is simple. Take the past data and do what we call supervised learning.  You build on the data we stored. 

There are three generations of models you can use. Yesterday we were doing descriptive models, which can describe the world.  The big wave is on predictive models: you want to predict the future, right? 

What we are trying to do a lot now is prescriptive models: basically, models which can describe the phenomenon and predict it, so you can take some action.

Next slide. That is how this works from a technical perspective. I'll spare you the details, but there are different sources of data on the left. We ingest the different sources of data into what we call a data lake, and we store all the raw data in this data lake. Then, based on this data lake storing all the data, we interact with the data, visualize the data, and in the end create those models at the bottom of the picture.
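To make that ingest-then-model pipeline concrete, here is a minimal sketch in Python of the flow just described. The file names, column names and the data_lake/ directory are hypothetical, invented for illustration; a real deployment would use a distributed store rather than local files.

```python
import shutil
from pathlib import Path

import pandas as pd

# Hypothetical "data lake": a directory holding raw, unmodified source files.
LAKE = Path("data_lake/raw")
LAKE.mkdir(parents=True, exist_ok=True)

def ingest(source_file: str) -> Path:
    """Copy a source file into the lake as-is; no schema is imposed at ingest time."""
    dest = LAKE / Path(source_file).name
    shutil.copy(source_file, dest)
    return dest

# Ingest heterogeneous sources (CSV web logs, JSON CRM exports, ...).
ingest("web_logs.csv")
ingest("crm_customers.json")

# Only when building a model do we read and join the raw data ("schema on read").
logs = pd.read_csv(LAKE / "web_logs.csv")              # e.g. customer_id, page_views
customers = pd.read_json(LAKE / "crm_customers.json")  # e.g. customer_id, churned
df = logs.merge(customers, on="customer_id")

# The modeling stage of the pipeline would now fit a classifier on df.
print(df.head())
```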

Next slide. In terms of the eXascale story, why we are here today: modeling users has been a very big and successful story, right?  Think of Facebook, Google; that's basically how they get most of their revenues. They model their users using their personal data to place specific ads, right?  It's also what you can do online or in stores to place products, and you can use personal data to decide what kind of content you generate: like Netflix, personalized recommendations, all sorts of applications.

Next slide. And the key issue from my perspective, it is a bit technical: basically, what I have described to you so far is the typical process of data science, which is iterative. We don't know in advance how we will build the model, and we don't know what kind of model it will be. We have to try dozens of times to build a model which can be used to derive business value.

This way we work with the data totally conflicts with privacy rights and what we call in law the principle of purpose, which works exactly the opposite way around. You want to basically ask companies to say in advance what they will do with the data. This is totally impossible today, because as data scientists we don't know in advance what we will do with the data; we have to try several times and only then arrive at the model. And I think this is one of the major issues in this space.

The final slide.  I see various solutions to bridge the gap between the rights that we want and the technology we have today.

First, previous speakers have already talked about it, we need transparency. We need transparency yesterday, right.

I know a bit of what the big Internet players are doing with our data. Most people don't know. We first need transparency.  We need to know what people do with our data, so that we can start the discussion. As long as we don't have transparency it will be very difficult to talk about those topics.

And then we need to work on techniques which are better in terms of preserving your privacy, and that's what I am doing in my lab in Switzerland.

And in the future, it's more like science fiction, we need some automatic way to broker the data: basically, okay, I am willing to give this piece of data in exchange for this free service, right?  We need some automated agents to do that. That's all for now, thank you.

>> MODERATOR: Thank you for being so honest. It is so concise, thank you very much.

You mentioned a key word "broker."

Brokers exist; they pass on and sell data, including personal data as I understand it. On that note I want to pass to John Morrison, the Executive Director of the Institute for Human Rights and Business. You have heard some of the views, John; what is your take? 

>> JOHN MORRISON:  Thank you it is a pleasure to be here. I will end on the issue of data brokers briefly.

But maybe reacting to the three things you said already and that I heard the panel say. Observation number one: this is too big for any one big easy answer. Big-Data is hugely complex. It is almost like saying, do you like food?  Yeah, I like food.  It's good for me and it's bad for me.

The second is the existing business and human rights framework, the tools we have in the human rights toolbox. Remy mentioned the human rights framework.

These things are stretched to the limit when we come to these questions. The third observation is that on this issue every company is an ICT company, and yet it is only the ICT companies talking about the responsibility elements of Big-Data. Very few businesses in any other sector are talking about this beyond data protection. Data security they get, but they don't talk about much else.

And fourth, where we're going, 70% of this will be business-to-business data traffic, the so-called industrial Internet. And most of those companies do not see this as a human rights issue at all.

So I think moving forward we have to put things in buckets, buckets we can get our arms around: short-term, medium-term, long-term risks. And I probably should say, of course, this technology delivers a huge upside, both commercially and in human rights terms. I haven't got time to talk about the upside, I will go straight into the risks, but take it for granted there is an upside. Blockchain, there's a lot.

In terms of risk, the short-term risk, where policymakers and other people seem to be in the game, is data security, data protection. There are laws emerging.

The medium-term risks are what some of you in the room and some panelists are hinting at: where the data and algorithms assault privacy and deliver business solutions perhaps not envisioned in the beginning.

Think of recruitment, or choosing a tenant if you are a landlord. Don't assume for a second that algorithms don't discriminate.  Algorithms are designed to discriminate; that is the purpose of an algorithm. What happens if the algorithms discriminate overtly or covertly in ways that would undermine human rights norms? 

The second example is consumer data. Consumers handing over huge amounts of data for years in exchange for a number of coupons. What happens when that profiling is used to steer their behavior and influence them? 

Third example: facial recognition technology has a lot of upsides, but what does that same technology mean in the hands of regressive governments, some of whom are pouring huge amounts of money into these developments? 

And then the longer-term challenges are almost existential. The last speaker talked of human agency, whether we can find an automated response to navigating some dilemmas. I wonder, as we remove humans from the process, think of the debate around drones and killer drones, what does it mean to create technologies where a human being is not involved in a decision to kill someone?  The most fundamental right of all, the right to life; that has been the framework of the last years. Or the chilling effect that data being held, without knowing how it will be used now and in the future, will have on freedom of expression around the world.

I think in the longer term, while the technology is hugely positive, unless we get our arms around it, it will have a hugely disruptive effect on international business and the international human rights framework.

The way forward: law will help us to some extent. Law is important. Codes of ethics and collective action around some of the concrete dilemmas are important, I think, as well.

I think the particular role of data brokers, people who make money out of the selling of data needs to be explored and heavily regulated. I would say that is a weak link in the chain we don't talk enough about.

And I think it is also about transparency, about consent: free and informed consent about how our data is used. Until we come up with meaningful consent, not tacit consent, we won't bring the population with us in this debate. A very brief outline of ideas.

>> MODERATOR: Thank you. Worried?  Excited? 

>> I am like Remy, I get excited when I am worried.

>> MODERATOR: And Microsoft, Bernard Shen, the big player, what is Microsoft's take on Big-Data?  Can you enlighten us? 

>> BERNARD SHEN:  Thanks. Thank you very much for the opportunity. My work at Microsoft focuses on human rights issues in cloud services: freedom of expression, privacy, and how A.I. implicates rights.

I will address two things.  First, how business can think about the impact of the use of A.I. and Big-Data on human rights, and how we address and think about mitigating the human rights risk.

And secondly, how we also as a society can truly unleash the potential of AI for good.

We have had a helpful, comprehensive discussion, and for me, I try to think of four elements. I think of the use of Big-Data in machine learning and A.I. as having the following four components.

First, you start with a human purpose. Artificial intelligence cannot tell itself what it wants to accomplish, so people decide in the first instance what we're trying to accomplish: the human purpose.

Then you have to decide what the relevant data is that will help you. Find and use relevant data sets, feed them into the mathematical models, machine learning, and learn from them, creating a model that can then help you make future decisions as future data comes in.

What do I mean by "mathematical model"?  I try to understand it by thinking back to high school math, you know.  We all sat in class and looked at graphs: the X-axis and Y-axis, and the correlated relationship between two variables. We were told to plot the data on the graph, one variable on the X-axis and the other variable on the Y-axis, draw a line through it, and derive the algebraic equation relating X to Y. And I try to harken back to those days to think about the mathematical model: it explains the relationship between variables, teaches you what the relationship is, and then helps you make decisions going forward; as more data comes in, if X is that, then Y is likely to be this.

And then that's the outcome, you have a model that can help you make decisions going forward because past data taught you how to build this model.
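As an illustration of that high-school picture, here is a minimal sketch in Python that fits a straight line to past (X, Y) data and uses it to predict Y for a new X. The numbers are invented purely for illustration.

```python
import numpy as np

# Invented past observations: X could be, say, years of experience,
# Y an outcome we want to predict for future cases.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# "Learn" the model: fit a line y = a*x + b through the points (least squares).
a, b = np.polyfit(x, y, deg=1)
print(f"learned model: y = {a:.2f}*x + {b:.2f}")

# The model now makes a decision for unseen future data.
new_x = 6.0
print(f"predicted y for x = {new_x}: {a * new_x + b:.2f}")
```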

Because of the limitation of time, I am going to quickly talk about two of these components and how they can impact human rights. On the human purpose: obviously the first question a business should ask is, what are we trying to accomplish?  Are we trying to use this technology in a way that is for a legitimate, good purpose?  Or, you know, are we trying to do something that does harm or is even illegal?  Obviously we don't want to do that.

Even where your purpose is positive and constructive, there's then the issue of whether there is imperfection in the application of the science. 

And then we look at data. If you want the science applied rightfully and appropriately, you look at what data you use to feed that learning process.  You want the data to actually be comprehensive and representative. You don't want it to be limited and biased.

For example, if you're talking about an employer trying to build a model to help it make hiring decisions, hiring new employees: if this company only draws on existing data from past employees to train this machine so it can help it make decisions going forward, the first question you should ask is, well, are the existing and past employees diverse?  Are they representative?  Or are they traditionally non-diverse along racial or gender lines, etc., so that it really is not a good representative data set? 

What it points to is an interesting question as to what data you should use. It's understandable that when you look at these issues and you are concerned about racial discrimination, for example, your initial instinct might be: well, you'd better not use race data in your analysis.

But it turns out in various science and research that this may not actually be the right answer. If you do a data-learning exercise you have a number of fields; in employment data you have the employee's age, gender, race, home address, etc. If you remove race, for example, home address is still there, and there's a high probability of a fair amount of correlation between where people live and what their racial or ethnic background might be. You think you removed racial bias by removing that field, but address is still in there.

Now as you learn from the data and build a model, the inappropriate bias may still be in there, because the home address of past employees is still in there, and you are actually influenced by biased decisions without realizing it, thinking you have a racially-neutral approach when, in fact, you haven't.
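A toy sketch of that proxy effect, with entirely made-up data: here "group" stands in for a protected attribute and "neighborhood" is a correlated field, so dropping the protected column alone does not remove the historical bias. The variable names and numbers are hypothetical; a serious audit would use a dedicated fairness toolkit.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Made-up data: "group" stands in for a protected attribute, and
# "neighborhood" is highly correlated with it (the proxy).
group = rng.integers(0, 2, n)
neighborhood = np.where(rng.random(n) < 0.9, group, 1 - group)
skill = rng.normal(0, 1, n)

# Historically biased hiring decisions: group 1 was favored.
hired = (skill + 1.5 * group + rng.normal(0, 1, n)) > 0.75

# "Race-blind" model: group is dropped, but neighborhood stays in.
X = pd.DataFrame({"skill": skill, "neighborhood": neighborhood})
model = LogisticRegression().fit(X, hired)

# The model still treats the two groups differently via the proxy.
rates = pd.Series(model.predict(X)).groupby(group).mean()
print(rates)  # noticeably different predicted hiring rates per group
```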

I think it teaches us it is not always as simple as it seems. You might think less is more, but in this case less is not necessarily more or better. Less may indeed be less.

So it just illustrates the complexity of data science, and that we really need to look deeper than our first instincts or assumptions sometimes.

So in terms of how Microsoft tries to get better at addressing these issues, a number of things: the C.E.O. not long ago outlined six principles on the use of A.I. technology. It must be designed to assist humanity; it must be transparent; it must maximize efficiencies while respecting human dignity; it must respect privacy; it must be accountable, so humans can step in and undo unintended harm; and lastly it must guard against bias and inappropriate discrimination.

The other thing we do: we realize that we cannot do this appropriately and uphold the principles alone.  Together with a lot of other companies we formed the Partnership on AI, to learn from each other and develop good practices so we can use A.I. responsibly.

And finally, a last point, because the session description talks about principles: we fully recognize that we need to live up to, that we are committed to, the (?) principles, and we acknowledge we are internally conducting a human rights impact assessment on the use of A.I.

>> MODERATOR: Thank you very much. That's a very responsible approach, thank you very much. You have been very efficient and concise in your words.

I think we talk about data protection, privacy, this is also about data governance, and the shared approach which you mentioned, that we need to come together on the same page and have the same common approach to governing data in all of its forms.

Now we have six minutes left until the end of the session, and I would say 10 minutes more if you wish to stay on, which gives us 16 minutes.  I will open the floor to you, a chance to speak to the experts. Let's have a discussion; you already discussed, so let's come back to you, then come back to the speakers, and then finish with the three organizers.

Anyone want to come in from the floor? 

>> AUDIENCE: I am worried about the fact that we always look at human rights as if it was just about my rights. And a difficult situation we live through now on the Internet, where Big-Data is used and affects us all in our daily lives, is the suggestion economy.

And that suggestion economy is driven by a massive invasion of privacy. But the target is not the individual directly; mostly it is the Judas principle: some of your friends get a free gift. It is not my data that I am asked to give, it is other people's data. And what is being done with that data seems to me not to match what we just heard, of somebody trying to analyze it and reach a good conclusion; behind this is a robot making decisions. It is not a human making decisions, it is a robot. Actually many of them, but let's say they tend to be the same thing.  And they make those suggestions.  They make so many decisions that no human person, nobody, can understand what this thing does. It creates so many situations to be analyzed, and maybe hundreds in a million can be, but that's not going to make a difference.  We don't know what the other decisions are that these robots make.

We are actually already in a total loss of control. But worse than that, it makes money. And, of course, if one of these big companies that work in social networks does not do that, does not use it, it is going to be eliminated in a competitive environment.

So even without desiring to do so, it is going to go on and do as much as it can in collecting data and making suggestions.

>> MODERATOR: Do you want to come in? 

>> PANELIST: If you look at the composition of the panel, we invited companies that are not making most of their business through data. There is a whole range of companies that can use the data but are not willing to exploit the data to make money out of it immediately, but rather to provide better services to the audience. That's the point.

I would like to hear what your suggestions are for how we can go in that direction, for all the companies that are not willing to use data only for simple effects. For the business industry it is important that we go beyond that.

>> MODERATOR: Thank you. Anybody want to come in? 

>> Thank you very much. I would like to come back to this artificial intelligence and bias. Because the only way to remove bias would be to remove the human dimension. There is absolutely no way that you could keep that dimension and not have bias. 

There are basically two options. One: artificial intelligence powerful enough to take into account all the Big-Data and remove bias, and then we are left with what we get.  But whether we as humans understand it, that's another question.

And the other one: if, as the other panel on artificial intelligence was saying, the ultimate goal of artificial intelligence is to give it the capabilities of humans, my question is, if this happens, if we give it cognitive abilities, emotions, should we consider expanding the human rights approach to include an artificial intelligence rights approach?  Thank you.

>> MODERATOR: We're going deeper, deeper. John do you want to quickly --

>> JOHN MORRISON: I worry about your first premise of somehow removing the human bias. From what I have read and understood about the way this is going, to assume that there's any point in the future where removing humans means removing bias is illusory. Biases are hard-wired, through assumptions, into the way machines learn. I think any correlation between automation and removal of bias is dangerous. And to think that because it is somehow mathematics and science, bias doesn't apply, or that it eliminates the fallibility of humans: I think it is a dangerous correlation.

>> If I may come in: I think, you know, there is understandably a lot of anxiety about machine, A.I. decision-making being biased, and I think one way we can think about it is that there seems to be an underlying assumption that human decision-making may be superior to A.I., but human decision-making can be the ultimate black box. A human may tell you they are telling the truth and be lying, or they may not know they are not telling the truth. And biases in decisions that humans may not know about, data can reveal, and then we have the opportunity to do something about it. We can't even begin to solve a problem we don't know about. If we apply science to look for it, we have the opportunity to address it.

And that's just the beginning of the potential benefits. I know it is natural to think of A.I. as only for business, but even in the context of business there's a lot of public good that's possible.

Think of the environment. A power company that can better use resources, so there is less waste and more efficiency. And farmers with agriculture, increasing the yield to feed the world's hungry while at the same time using fewer resources. That has tremendous impact on people's lives.

But are our farmers for-profit businesses?  Yes, they are, but there's a lot of globally beneficial impact that comes with it as well.

>> MODERATOR: Okay.

>> I would like to comment quickly on the bias from a technical perspective.

It is basically impossible to build a model without any bias, right.  But what we can do, so I hope there is still hope, is run experiments for a particular type of bias and try to correct for it, right. We have this feedback loop on this approach, which can improve the models, and that's the state-of-the-art.
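A minimal sketch of such a bias experiment and correction loop, continuing the hypothetical hiring data from earlier. The metric (the gap in positive-prediction rates between two groups) and the reweighting fix are just one simple choice among many; production systems would use more principled fairness tooling.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def rate_gap(model, X, group):
    """One concrete bias metric: gap in positive-prediction rates between groups."""
    pred = model.predict(X)
    return abs(pred[group == 0].mean() - pred[group == 1].mean())

rng = np.random.default_rng(1)
n = 2000
group = rng.integers(0, 2, n)
skill = rng.normal(0, 1, n)
# Made-up historical labels that favor group 1.
y = (skill + 1.0 * group + rng.normal(0, 1, n)) > 0.5
X = np.column_stack([skill, group])

# Experiment: measure the bias of the naive model.
naive = LogisticRegression().fit(X, y)
print("gap before correction:", rate_gap(naive, X, group))

# Feedback loop: correct by upweighting the disadvantaged group's positive cases.
w = np.where((group == 0) & y, 3.0, 1.0)
corrected = LogisticRegression().fit(X, y, sample_weight=w)
print("gap after correction: ", rate_gap(corrected, X, group))
```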

>> To come back, that's why I was asking the question. I come from a background where most think A.I. will help. But for some it is not absolutely necessary to control it, rather to direct it in a way that corrects the bias that is already harmful; that's what we know.

>> MODERATOR: And you want to come in, please.

>> It's hard to present a vision in the remaining two minutes here. I think the basic problem now is that the whole development of the technology in use is very business-driven. And we did not develop business models that are without data use. We could have.

There is a long tradition of discussion on technological sovereignty. It has not really been followed up. There was no real support by, for example, the E.U. in certain research projects; no support for different forms of using social media or setting up social media. So presenting a really good alternative vision takes a little bit longer than we have; this is not possible now. But I think what we have to be aware of, and I don't think it is as much about bias and A.I., is that as soon as you have a personal profile, this profile can be used against this person in a political way. It can be misused in any way. Whoever has the access has the power to misuse it.

And this is the basic problem here. And therefore, we have to be really extremely careful with personal data.

So we all agree on these nice examples of agriculture, as long as it is not used against the guy who is actually farming, the peasant, right?  If he still owns the data, then it's nice. But if the people who own the data make him unemployed, it's also a problem. In general, though, this field, the agriculture part, is the nice part.

But the personal data, the personal action, this is a democratic problem and we really have to think carefully about that.

>> MODERATOR: Okay, thank you. Any other comments from the floor?  Anybody else want to come in on this discussion?  Please, thank you.

>> AUDIENCE: Thank you. My comment is more a question. Where are the ideas going for regulating this?  In other words, you've got this treaty from the Council of Europe, but are there other ways in which people are effectively addressing a regulatory situation that can meet these challenges? 

>> In my experience, at the European Union level there is a discussion going on about the impact of new technology, Big-Data and artificial intelligence on human rights.

Of course, it is something which has just started. And I have the feeling that what has been captured at the moment is probably the more problematic aspects rather than clear solutions.

Let's say from our side, from the purely data protection perspective, we try to give some answers in the guidelines I mentioned before. And I just want to echo what was said by John before about his concerns over completely excluding the human component from the decision-making process. I think that was very much the intention of the guidelines; they were created in that kind of direction.

>> MODERATOR: Thank you. I am looking around and don't see any more comments. Are there any more remote participants?  Anything more coming through remotely, Peter?  Not at all?  Then I will cue up our colleagues to give some final wrap-up comments. These are the people who actually brought us here together today.

But I would say on my own behalf that with Big-Data, the horse has bolted. It is out there; data has been collected and stored and used for analysis, etc. Big-Data analytics. And companies don't have a duty to protect, but they do have to respect human rights, while states have a duty to protect people in their private life, and that means the protection of their data. 

So we have guarantors, and those that don't have to comply if they don't want to. Then it comes down to enforcement, state rules and companies. And to the propensity to turn anonymous data into personal data.

And, I don't know, there is a great propensity for your data, which has information about you, to be collected together with other data until it makes you you, and I would like to know more about what the risks of that propensity are. And building in risk assessment, as with Microsoft: is it enough?  Or do we have to go with enforcement?

And I would like to conclude: maybe Peter first.

>> Thank you very much. I would like to thank the organizers, and I would like to thank the panelists.  I think it was an extremely rich and informative panel, and I would really like to thank the audience; you were actually very great. And I think I can be cautiously optimistic, on the condition that human oversight and human control remain around this question and around how we implement this technology.

And I think the risk potential has been really extensively portrayed here, as has the legal framework, the current legal framework, which is very important and relevant.

And I also think, and the panelists touched on this, that self-regulation and the exchange of best practices when it comes to technology and business practices were really recommended here, as well as creativity.

This is why I am cautiously optimistic: because I heard a lot of creative ideas for how to tackle this risk, which comes with this potential. And I would like to conclude by saying that the Council of Europe and its data protection work always remain open to bringing this thinking further and to incorporating it, either in the framework of the exchange of letters and its continuation, or in frameworks with other organizations or stakeholders. Thank you very much, everybody.

>> MODERATOR: Thank you. Did this meet your expectations? 

>> It is always bigger than what you expected. So I think that I got a lot of excellent feedback.

The first feedback is that the mixed sensations we have when we deal with data are the same feelings from your perspective. It means we are right on the point.

It is an opportunity that is too good to miss, but we have to be very cautious, because I think we are missing the point of trust. If there is no trust, it will not work. For all technological things it is like that. 

Credit cards were not flying until the data was safe, until it was like 99% sure that you would not be stolen from. And I think the use of data is the same thing.  We will not really have an environment, a relationship built with people, if there is no trust at the end of the day.

So I think this feeling is shared, and I think we have to work a lot on that. I think that we can restart this discussion at the end of February, when we have the big conference of the public service media in Geneva, and I hope that some of you can be back with us for the discussion there, because we need all the expertise and all the advice, thank you.

>> MODERATOR: Remy, you have the final word.  More excited?  Or worried?  Or don't know? 

>> REMY FRIEDMANN:  I am excited and, in the meantime, we provided a lot of data. And there was less bias with me when I was writing down the notes while we were talking, so I am really happy to have had this wonderfully organized panel and a very active audience in this interaction.

I think there are some elements here: we have identified risks that we all are aware or conscious of, and maybe they can be translated to the next generation. The next generations are actually born with the technology and have this trust, and trust in the machine is not necessarily something that comes without risks. If we can do some work now and address some of these issues across different economic sectors, raising awareness that companies that are not I.T. companies also have a responsibility; if we can engage in a dialogue, from my perspective from the Swiss government, together with other representatives, across the government and also with sectors that are of importance here, also engaging with other active governments and international and intergovernmental organizations on how the business and human rights framework can possibly be useful to address these risks and maximize the positive potential, then the use would be better. I think that's a good program, an ambitious one, and I am possibly excited.

>> MODERATOR: Thanks to the speakers; you were eloquent. And thank you to you all for staying on for questions and dialogue. The meeting is closed, thank you. Bye-bye.

[Applause]

>> Just one announcement. We are looking for a black lady's purse with a silver clasp. If you see anything close by, can you please let us know?  A black lady's purse with silver clasp. Thank you.

[The session concluded at 18:34.]