IGF 2020 - Day 9 - BPF Data and New Technologies in an Internet Context

The following are the outputs of the real-time captioning taken during the virtual Fifteenth Annual Meeting of the Internet Governance Forum (IGF), from 2 to 17 November 2020. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

 

 

     >> MODERATOR: Hi, I will host this session for the IGF.  I have a question: are we waiting for two more panelists, is that correct?  Can you tell me, please?

     >> WIM DEGEZELLE: We are waiting on at least one panel lead.  So I would say let's wait.

     >> Okay, of course.

     >> WIM DEGEZELLE: Let's wait maybe a minute.

     >> I wanted to tell you this session is recorded and hosted under the IGF Code of Conduct and UN Rules and Regulations.  The chat feature is for social chat only, and the Q&A feature is used to ask questions.

     >> MICHAEL NELSON: Thank you very much for all you are doing for us to make this work smoothly.  It has been quite impressive.

     >> WIM DEGEZELLE: There might be an issue; some people have a problem registering for the session, so that might explain why.

     >> MICHAEL NELSON: You are talking for the first five minutes?

     >> WIM DEGEZELLE: Titti and I.  If needed, we can start.

     >> MICHAEL NELSON: We might as well start.

     >> WIM DEGEZELLE: I will start and then see if in the meantime our panelists and I see --

     >> WIM DEGEZELLE: I would say, Titti, if you want to.

     >> MODERATOR: I can start?

     >> WIM DEGEZELLE: Yes, I would.

     >> MODERATOR: Okay, good morning, everyone, and welcome to this session of the BPF on Data and New Technologies. 

     My name is Concettina Cassa.  I work at AgID, the Agency for Digital Italy.  And I'm the moderator and co-facilitator of the BPF on Data and New Technologies in an Internet Context, together with Emanuela Girardi from Pop AI and Wim Degezelle, Consultant to the IGF Secretariat.

     So before we start the discussion, I want to share a few words about the Best Practice Forums and what a BPF is.

     The Best Practice Forums are actually part of the intersessional activities led by the MAG.  It is up to the MAG members to propose a BPF, and then a process is started. 

     The MAG discusses and analyzes the topics presented by its members and then approves them.  After the approval by the MAG, the BPF can start.

     So what is a BPF?  It is actually a platform to exchange experience and to address internet policy issues.  The BPF follows an open, bottom-up and collaborative process.  It relies on community input and has the objective of identifying best practices on internet policy issues. 

     So the BPF actually discussed this issue during the year.  It then organizes a session, like the one you are participating in here, to collect input.  At the end, a report is issued based on the discussions and the results collected by the BPF.

     But coming back to the BPF on Data and New Technologies in an Internet Context: this BPF is focusing on how user data is collected, analyzed and used, and on best practices in which data is used to bring benefit to users.

     And let me also share that this session actually has three parts.  In the first, Wim Degezelle will share the main points of this year's BPF discussions, as well as the case studies we collected and how they discuss the collection of data through these new technologies.

     Then we have Michael Nelson, Director of Technology and International Affairs at Carnegie.  He will share with you why we need a new mind set, how our views are changing, and the impact of COVID-19 on the collection and use of user data.

     Then at the end of the session we will have a discussion on the next steps and how to address these issues in the future.  So at this point, let me give the floor to Emanuela for the next section.  Emanuela, the floor is yours.

     >> WIM DEGEZELLE: I see that Emanuela has not yet joined; she should be here but is having some problems logging in.  Let me already jump to the next point and say a few words on some of the activities.

     This is Wim Degezelle.  Let me say a few words on some of the activities we have been doing within the BPF.

     Let me share my slide.  I have a short presentation, not too long, because I want to highlight one point.  The Best Practice Forum, like Titti explained, started discussing early this year, I think at the end of February.  We had regular meetings to, first of all, discuss what we wanted to talk about and how to further define and refine the task we got from the MAG: to discuss best practices and to start looking for those best practices.

     One of the points that came up early on was about definitions and concepts.  The question was: what exactly are we talking about, or what do we want to talk about?  Do we use the same concepts and words?  Do we use the same words with different meanings?  That point was flagged early on in the discussions, and I'm sure it is one of the elements Mike will come back to during the roundtable. 

     I wanted to use this short time to flag one of the outputs of those early discussions, and that is the BPF data and new technologies issues cards.

     It is a short document, two pages, that maps the different issues, challenges and concerns that came up when, during the BPF sessions, we asked: what exactly are the issues you think about when we discuss data and new technologies and what happens with users' data?  I think we spent two or three calls discussing these issues, and we also put the document on the mailing list to get additional input.

     So what is the issues card?  It lists the different concerns, formulated as questions.  Why?  Because we want them to be a tool that helps people discuss issues and start up a dialogue on specific issues.

     Because we hear a lot about problems with data and concerns about data, but at very few moments will people be specific.  They will mention data as a problem or an issue or a concern, or even data sharing, data collection, storing data.  But even during the discussion people will not really specify what they are talking about. 

     And that can lead to the problem where different people around the table think that they are discussing the same thing but, in fact, are talking about something different.

     So the issues card is, like I said, a tool to foster dialogue; it helps to raise awareness and should specify and focus the discussions.  So who is it for?  I think the issues card could be used by almost everyone, whether policy people or even politicians sitting around the table discussing the issues.

     But I also noticed that it can be interesting for individual users like you and me.  If you think about your mobile phone and say, oh yeah, I tagged something or published something online, you know there are specific data-related questions or issues linked to that.  Working on the project, this issues card also helped me to raise questions for myself. 

     What happens with my data?  Do I know where it is stored, and things like that.  As an individual you can look at the issues card and ask: okay, are there questions in there I should be concerned about?  Or are there other things where I would say, well, they are interesting, but probably not something I should be concerned about.

     So where to find the issues card?  We hope that it can be used as a standalone tool or document to foster and help build discussions.  It is available in the report, but there is also a standalone version available on the website.

     Thank you.  I hope it is a useful tool.  And then I see that Emanuela has joined us and I would like to give the floor to her to take over and lead the first section of our workshop.

     >> EMANUELA GIRARDI: Hi, everybody, sorry, I had some problems with the connection.  I got confused with the UTC timing, but I'm here now and we can start.  Sorry I missed part of the initial presentation from Wim, but I know that he gave a very good presentation about the issues card and an overall view of how we developed the BPF process and reached this final discussion today.

     Okay.  So I'm here today with some very distinguished guests that I'm about to introduce.  I already see Veronica Arroyo; she is a Latin America Policy Associate with AccessNow.  She will present one of the case studies that have been submitted during the process over the last months.

     And then in a few seconds Ricardo Chavarriaga should be joining us.  He also got confused with the timing.  And I haven't seen Cathal McDermott.  Has he already joined us?  I don't see --

     >> WIM DEGEZELLE: Yes, he has, but maybe he still has to be upgraded to panelist.

     >> EMANUELA GIRARDI: Okay, good.  So the idea today, following the presentation about the issues card, is to look at best practices for using data to bring benefit and not to misuse it. 

     So today I would like our guests to briefly present the cases they submitted on this theme and to talk about their experience.  Veronica is already with us, so I will start with her. 

     Veronica, with AccessNow, submitted a case study on some dos and don'ts for COVID-19 contact tracing.  What I would like to ask, Veronica, is if you can first briefly introduce your case. 

     And then, in particular, can you tell us what the main risks related to users' data are, and which apps globally best balance the dos and don'ts that you listed in the case study you submitted?

     Please, Veronica.

     >> VERONICA ARROYO: Thank you very much for this opportunity. 

     I will introduce AccessNow first.  AccessNow is a global digital rights organization that has been working for the past 10 years extending and defending digital rights around the world.

     And this time I'm very proud to present the dos and don'ts guide for lawmakers or for States regarding contact tracing and digital contact tracing.

     This document is a very comprehensive one, and I feel very proud of it.  Why?  Because it was built by all of the people at AccessNow.  We are a global team of technologists, advocates, lawyers and designers. 

     All of the recommendations are very broad and based on data protection principles.  They also face the reality we live in: we live in a very diverse world, and it is necessary to reflect that in the recommendations.  So we have eight dos and seven don'ts.

     I'm sharing this on the chat so you can check it out.  I really encourage you to go there and check all of the recommendations that we made. 

     For us, it was quite important to have this document, even if we don't believe that digital contact tracing can solve the problem we are dealing with right now.  It also raises a lot of red flags on privacy issues.  We felt that it was important to give this kind of information to the governments and also to shape the discussion, which is quite interesting, and that is why we are here.

     We used it also, and that is what you can find in our submission, in our particular case here in Peru.  As you might know, Peru publicly announced that it wanted to have a digital contact tracing app.  We used the guide to start talking with the government, because the government is not very open to talking with Civil Society; this time it helped build a bridge between the government and Civil Society and allowed us to participate in all the debates that we had here.

     The good news is that at the end of the day, we don't have this contact tracing app here in Peru.  For us, that is good news.  And when you asked about some of the risks of using data, at least in this moment of pandemic: for us, it is misuse when you change the purpose for which this data is used. 

     That is one of the big red flags that we have.  The governments are very positive about using this data to combat the pandemic at this moment, but one of the things that is quite important for us is to see what they are going to do in the future: whether they are going to keep using the data they are gathering and processing right now for other purposes, because there is a lot of sensitive data.  I think we have to pay attention to that specific point.

     So I will stop here; I can explain more during the questions part.  Thank you.
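
     To make the purpose-limitation risk Veronica describes concrete, here is a minimal sketch, in Python, of how a system could bind each record to the purposes consented to at collection time, so that a later change of purpose fails loudly.  All names and values here are invented for illustration; this is not code from any real contact tracing app.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Record:
        value: str
        allowed_purposes: frozenset  # purposes consented to at collection time

    class PurposeViolation(Exception):
        pass

    def access(record: Record, purpose: str) -> str:
        # Refuse any use of the data outside the purposes it was collected for.
        if purpose not in record.allowed_purposes:
            raise PurposeViolation(f"purpose {purpose!r} was never consented to")
        return record.value

    location = Record("lima-district-3", frozenset({"covid19-contact-tracing"}))
    access(location, "covid19-contact-tracing")   # permitted
    # access(location, "law-enforcement")         # would raise PurposeViolation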

     >> EMANUELA GIRARDI: Thank you, Veronica.  Very interesting.  Also in Europe we have several of these apps; we have one in Italy, where I'm located, called Immuni.  We have several different apps in Europe that really took into consideration the privacy issues that you just raised.  Thank you. 

     I think that Ricardo Chavarriaga just joined us.  Welcome, Ricardo.  Ricardo is a Senior Scientist at Zurich University of Applied Sciences and is the co-lead of the CLAIRE Task Force on AI and COVID-19. 

     So Ricardo, I would like you to briefly introduce the CLAIRE COVID-19 Task Force, and then to tell us about the main challenges related to data collection, storage and labeling that you faced during the task force experience.

     And then, if you can, also give us some recommendations for policy makers based on this experience.  Please, Ricardo.

     >> RICARDO CHAVARRIAGA:  Thank you, Emanuela, for the introduction and for the invitation to share here the experiences and the lessons learned at the CLAIRE COVID-19 Task Force.

     So let's start by introducing CLAIRE.  CLAIRE is a network of European researchers in AI and covers about 400 research groups in academia and in industry.

     In March, given the effects of the pandemic, we launched a voluntary effort to gather scientists to help in tackling the effects of COVID-19.  We were able to enroll about 150 volunteers from different areas of AI and from different countries, asking: what can we do to contribute? 

     Similarly to the contact tracing apps, this was a period when there were great expectations about how science and technology could contribute and help in this process.

     And so we aimed at contributing to that.  One of the first things we saw was that there were many activities and a lot of information coming out, but it was not properly structured.  So we first aimed at collecting information in a structured way about databases, tools, initiatives, and challenges directed towards the COVID-19 situation, and at helping people find the right information. 

     We also had groups that looked at more specific developments of how AI can contribute, such as the analysis of social media activity. 

     Throughout this period, one of our observations was that there was significant difficulty in collecting reliable, labeled data.  This difficulty was mainly due to heterogeneity in privacy regulation and a lack of integrated, common pipelines for data collection and storage across health institutions and public bodies.

     So we were in a situation where there was a lot of data, but very fragmented, and very, very difficult to put together so that we could make proper use of it. 

     There were some public and private initiatives on collecting data and trying to use it in certain ways.  But there was a certain lack of transparency about how this data is managed, what legal frameworks applied, and how the products derived from the use of the data would later be taken into account. 

     Will it be public use or private use, and how can we deal with the usage and the value generated by that data?

     Similarly to the data collection, there was a lack of available infrastructure for cloud storage and interoperable use of the sensitive data that we were collecting or trying to use for artificial intelligence. 

     Also, most of the data was not labeled in a consistent or reliable way.  So this is something we have to take into account in future technical developments: systems that can work with unlabeled and unreliable data, for the basically unlabeled world we have right now.
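
     One family of techniques behind the point Ricardo makes about unlabeled data is self-training, where a model adopts its own confident predictions as pseudo-labels.  A minimal sketch follows, assuming scikit-learn-style classifiers and NumPy array inputs; the confidence threshold and number of rounds are arbitrary illustrative choices, not the task force's method.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def self_train(X_lab, y_lab, X_unlab, threshold=0.9, rounds=5):
        """Iteratively pseudo-label the unlabeled samples the model is
        confident about (inputs are assumed to be NumPy arrays)."""
        model = LogisticRegression(max_iter=1000)
        for _ in range(rounds):
            model.fit(X_lab, y_lab)
            if len(X_unlab) == 0:
                break
            proba = model.predict_proba(X_unlab)
            confident = proba.max(axis=1) >= threshold
            if not confident.any():
                break
            # Adopt the model's own confident predictions as training labels.
            pseudo = model.classes_[proba[confident].argmax(axis=1)]
            X_lab = np.vstack([X_lab, X_unlab[confident]])
            y_lab = np.concatenate([y_lab, pseudo])
            X_unlab = X_unlab[~confident]
        return model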

     To close, in terms of recommendations, we think that it is important to take a proactive stance to build a more resilient infrastructure.  By infrastructure, I mean flexible, agile, fast-response mechanisms that allow experts from different domains to mobilize, work together and have access to resources. 

     Resources in the form of interoperable data that can be used; this implies flexible governance mechanisms that may allow secondary use in extraordinary situations.

     And, of course, when we think about that, we also have to think about oversight mechanisms that can respond in flexible ways when challenges arise.  With that, I will stop, and I would be happy to discuss in the Q&A.

     >> EMANUELA GIRARDI: Thank you.  Very interesting, the fast-response mechanisms using AI and data.  I think we can further discuss them in the second part, about the new mind set. 

     Okay.  I see that Cathal McDermott just joined us.  Welcome, Cathal.  Cathal is Senior Counsel in Privacy and Regulatory Affairs at Microsoft, working in Microsoft's Corporate, External and Legal Affairs Department. 

     We mentioned the privacy issues before with Veronica, and Cathal will talk about the case studies that have been submitted by Microsoft.  Microsoft submitted three case studies, very interesting ones.

     In particular, he will talk about the one on the privacy principles that Microsoft designed to develop and implement COVID-19 technologies. 

     So Cathal, can you please tell us what these principles are?  And why did Microsoft, if it already had privacy principles, decide to introduce new privacy principles for COVID-19?

     And if you have some time at the end, I would also like to ask you how Microsoft is promoting awareness among individuals, one of the points I read in your case studies, of how their data is collected and used.

     So please, Cathal, I give you the floor.

     >> CATHAL MCDERMOTT: Thank you, Emanuela. 

     I think these are great questions.  And we have quite a, you know, substantial body of work in Microsoft around COVID. 

     When we initially looked, at the beginning of the year, at what Microsoft could do about COVID, we decided that given the global scale of the pandemic, technology was going to play a very critical role in almost every facet of the response to COVID-19. 

     And, you know, Microsoft was in a position to leverage a lot of our technology, from AI right through to the partnerships we had with non-governmental organizations, governments, and Civil Society.

     So some of the work we initially moved on was partnering with many, many States and national and local healthcare organizations and providers, in addition to researchers, nonprofits and governments. 

     We were really trying to use our technology to help develop solutions to the COVID-19 pandemic.  This included solutions such as coronavirus self-checker tools, on which we partnered with the U.S. Centers for Disease Control and Prevention; helping launch coronavirus trackers on Bing, the Microsoft search engine; using AI to help decode the immune system response to COVID-19; et cetera. 

     And to get to the question here: we understood that using companies' technology and sensitive data to help respond to COVID-19 was going to involve a certain level of tracking, testing and tracing to help fight the pandemic.

     And one of the most critical aspects of this response was always going to be how to ensure that people's privacy continued to be protected. 

     Of course, Microsoft and other companies out there already have very deep and extensive privacy principles built into what they do.  But given the novel approach being taken to fight coronavirus, this pandemic was new, and it required a lot of new and interesting uses of technology.

     So Microsoft felt it was necessary to really reinforce and sort of double down on our privacy principles, especially as they relate to COVID and our COVID work to respond to the pandemic.

     At the ground level, our belief was that people should be in control of their data and empowered with information explaining how their data will be collected and used.

     We believe this was absolutely critical to the success of the technology and the COVID response.  And, of course, on top of that, companies needed to be accountable and responsible for this data.

     We offered seven principles as really grounding ideas to consider how we could ensure that privacy was respected as we fight the pandemic.  It is interesting to look at a high level at some of these principles and how they were adopted within Microsoft and across our response.

     So, firstly, we deemed it extremely important to obtain meaningful consent and to be transparent about the reason for collecting data, what data is collected and how long it is kept.  We wanted to ensure that people understood that the data was only ever going to be used in the manner explained to them.  Information about the services was to be clear and user-friendly, so that people were able to interact with the technology and make informed choices.

     That was at the ground level.  This was going to be the key aspect of how we responded to COVID while ensuring privacy was protected. 

     Secondly, we wanted to ensure that data was only collected for public health purposes, with the consent of the individual, and that it should remain under the person's control.  It was not to be used for unrelated purposes.  It was important that people understood where the data was going and that it was going to be used exclusively for the pandemic response.

     Thirdly, and in line with many baseline laws and privacy regulations, it was important to collect the minimal amount of data required for the response and for the solutions in question: really limiting it to the specific data required for the specific time period, and ensuring that it was only used as necessary by the public health experts. 

     We also wanted to provide clear and meaningful choices to individuals about where their data was to be stored.  There has been a discussion, which we wanted to move forward here, around whether the data is stored in the cloud or on the person's device; we have seen that come up in some jurisdictions and be quite an important factor in this discussion.

     We also wanted to ensure that the appropriate safeguards, de-identification, encryption, random identifiers, all of those in our toolbox of encryption solutions, were in place to help protect people's data from any harmful exposure and hacking attempts (see the sketch after this intervention for an illustration of rotating random identifiers).  We were very focused on the fact that this is extremely sensitive data and everything should be done to ensure it is only used for the specific purposes in question.

     We wanted to ensure that the data, or the health status of the person, was not shared without consent, and that no excess data was shared with any third parties.  We didn't want things to be shared beyond what was required for the particular solution.

     And lastly, in line with good practice and good data hygiene, the data was to be deleted as soon as the purpose was achieved.  There was not to be any unnecessary storage on devices or in the cloud, and copies of the person's data were to be removed from any third party or public health authority it had been shared with. 

     These principles from Microsoft were designed to apply across the work Microsoft was doing, but we believe they were also a very good grounding for any privacy work around COVID happening at other companies and, indeed, in the public health sphere and Civil Society. 

     That is how we approached the work, and it was the basis for our case study here: ensuring privacy was front and center in our COVID work.
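
     As a concrete illustration of the "random identifiers" safeguard Cathal lists alongside de-identification and encryption, here is a minimal sketch of a rotating ephemeral ID, similar in spirit to decentralized exposure-notification designs.  The class name and the rotation interval are assumptions for illustration, not Microsoft's implementation.

    import secrets
    import time

    ROTATION_SECONDS = 15 * 60  # illustrative rotation interval

    class RotatingIdentifier:
        """Emits short-lived random IDs so that observations of a device
        cannot be linked into a long-term profile of its owner."""

        def __init__(self):
            self._refresh()

        def _refresh(self):
            self._id = secrets.token_hex(16)  # 128 bits of fresh randomness
            self._issued = time.time()

        def current(self) -> str:
            if time.time() - self._issued >= ROTATION_SECONDS:
                self._refresh()
            return self._id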

     >> EMANUELA GIRARDI: Okay, Cathal, very, very interesting.  Thank you for sharing this with us.  I think the most interesting part was where you said you tried to empower people to understand how their data is collected, used and stored.  Thank you.

     I see that we don't have any questions in the chat at the moment.  So I would like to thank Ricardo, Veronica, and Cathal.  And since we have saved a couple of minutes, I will now give the floor to Michael Nelson. 

     We collected lots of very interesting input from these case studies and from the presentations of Ricardo, Cathal, and Veronica.  I will thank them and ask them to stay with us for the roundtable about the new models and the new mind set for collecting and sharing data, which will be moderated by Michael Nelson. 

     So please, Michael, I give you the floor for the second part of the session.

     >> MICHAEL NELSON: Thank you very much.  And thanks for some very useful context. 

     I'm a technologist; I worked in business, where it always starts with the product and the service, and that's why I'm so glad we have these examples to work from.

     As Wim mentioned at the start, I'm the Director of Technology and International Affairs at the Carnegie Endowment for International Peace in Washington, D.C.  It is currently 4:35 a.m. and I haven't had any coffee yet.

     But I'm very glad to be part of this.  I'm here because I have been part of the Best Practices Forum group and probably more importantly, was part of the preceding group that worked for two years on Internet of Things and big data and artificial intelligence.  And it has been a pleasure working with Wim and Titti and other members of the group. 

     But I'm also here because I provide some diversity.  As a technologist and someone who spent most of my career in the private sector and as an advocate for regulation as a last resort rather than the first reflex.  So I provide some diversity. 

     And I'm also one of only about three people I think from the Western Hemisphere on this call, since it is 2:00 in the morning in Silicon Valley.  I did see Jim Prendergast, another person that has been at lots of IGF meetings.

     But most importantly, I'm here because in the past, after working at IBM, I had a midlife sabbatical and taught at Georgetown University.  I was a Professor in the Communication, Culture and Technology program.  And I learned a great deal about how to talk about technology from journalists, communication specialists and sociologists.  And that's what we're going to do today.

     I have been on the MAG in the past, and I've also been very involved in the IGF USA.  And I know that one of the biggest challenges with these sessions is that we don't have a good way to involve everybody who is part of this event, all of the audience members who might have something to say.

     And so I'm going to try something new.  The other thing I'm known for is being disruptive and trying new formats at the IGF USA.

     We have done debates, we've done policy hack-a-thons.  We even did a policy slam, which is a little bit like a poetry slam, where people get up and announce what they would do if they were king of the internet or queen of the internet or queen of the cloud.  And then the audience votes on which one they like best.

     So we're going to do something like that.  We can't have everybody talking at once so we're going to use the Q&A function of Zoom.  And I apologize for those who are just calling in and don't have access to that. 

     Most of you will be able to post on the Q&A section and also to rank the different things that are being posted there.  So what I'm going to do is I'm right now going to post -- let's see.

     Wait a minute.  Okay.  Can I not post questions?  This is not good.  Okay.  Let me post to the chat room a list of provocative statements.  And I'm also posting it in the Q&A.

     But what I have done here is post some policy statements and buzzwords that get in the way of thinking about data, artificial intelligence, machine learning, the digital society, human rights and all the rest.

     And so what I want you to do is think about new ways to think about technology.  And let me give you some examples.

     You will notice that our report, although it talks a lot about machine learning and artificial intelligence, is not titled artificial intelligence.  One of the reasons is that there is so much confusion about what artificial intelligence is.  There are literally 10 different definitions, and they are quite conflicting.

     It is a term that has been used for more than 60 years.  So what I'm proposing is that instead we just talk about algorithms.  AI is just a marketing label, and many of the problems are really about spreadsheets and very simple models that don't involve artificial intelligence. 

     The second term we need to rethink is data governance.  As the report says very clearly -- and this is I think one of the most important findings -- if we don't know what we are talking about, we are not going to get good policy. 

     Data governance to a CIO means something very specific.  It means how do I manage my corporate data.  Unfortunately, in a lot of rooms here at IGF, people have said data governance and they are really referring to data policy, what governments are doing to control the flow of data and how it is used.  And to say it is all one thing is misleading.  I urge us to separate those two problems.  What do individuals do, what do governments do, and use different terms.

     The third term that is really confusing a lot of people is ethical artificial intelligence.  Technology is not ethical; people are ethical.  So let's talk about trusted uses of big data and AI.  Similarly, let's not talk about AI ethics.  We can't even agree on whose ethical system to use.  Let's talk about digital human rights.  We do know what human rights are.  We do have UN documents that define human rights.

     Another very, very confusing phrase is my data.  We hear people talking about how they need to control their data.  Well, as we say in the report, you have got to specify what types of data you are referring to.  So thank you for putting that forward.  Hang on a second.

     So when you say my data, we should be talking about the stuff that I post on the internet, personally identifiable information that I have provided in forms and then everything else.  And there is a lot of everything else that really isn't my data.  It might be a picture of my car as I drive down the road.  Or it might be a picture of me at a restaurant with a friend that somebody else took.  That's not my data, somebody else took the picture.  Okay.  Now let me get to the three most important and most misunderstood ideas.

     This is one that The Economist put on its front cover.  Data is the new oil.  This is the most destructive term and phrase that we have in this whole debate because it makes countries and governments think that data is something they need to hoard, that it is a strategic asset that somehow they need to track where it flows.

     Data is not oil.  It's shareable.  It's not consumable.  You don't burn it.  If I have my data, I can copy it and give it to you.  You can't do that with oil.  But the idea that data is the new oil is leading to the next problem which is digital sovereignty and data sovereignty; that countries need to keep their data inside their borders, they have to have total control over it just as they want to control their oil supply.

     Milton Mueller made the case very, very effectively a few days ago that we should talk about personal sovereignty.  Let's give each of us the ability to know where our data is and to know what is protected. 

     That requires a number of things.  One that is really important is better encryption, better quality mechanisms, better ways to know where my data is and who has had access to it.  For my money, that's the most important part of the General Data Protection Regulation: the part about transparency, knowing where my data is. 

     The last one on the list that you are seeing is about cyberspace.  This phrase was initially used by William Gibson to describe what he called a consensual hallucination of people linked together.  Then UNESCO talked about this space where ideas flowed freely.

     But then the U.S. Defense Department started talking about cyberspace as if it was simply a bunch of computers and networks linked together.  And they started talking about cyberspace as the new battlefield. 

     Cyberspace is not spatial.  The internet is not in one place, and it is not static.  It is constantly moving as people move and ideas flow.  The idea that we can somehow have cyberspace and draw boundaries around it is really causing problems.  And again, the phrase itself, cyberspace, is leading to a lot of confusion.

     So this is a hard one, but we have to find a better term.  I like cyber civilization.  Cloud of things.  The Japanese talk about Society 5.0.

     The last issue I would put out here is also in the chat, and that is that we should stop talking about data brokers, people who just gather all of the data together and combine hundreds of different datasets to get an incredibly accurate profile.  They do that because they need all the data in one place so they can run algorithms against it.

     There is a new idea, and one of its champions is Sandy Pentland at MIT, and that is data unions.  These are third parties who take your data and store it just like a bank stores your money.  That enables people to run algorithms against data in hundreds of different places.  And in each case, as the data is accessed with a particular algorithm, the algorithm will have to get permission, and it will be very clear what data it needs to use and what data and answers it is retrieving.

     I mean it might be a very simple thing, like: who in this group of people in Washington, D.C. is over the age of 18?  They don't need to know 1,700 things about me to find that out.  So this is the kind of new thinking we need, new terminology.
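
     To make the data-union idea concrete, here is a hypothetical sketch in which an algorithm must name a pre-approved query and receives only the minimal answer, never the underlying attributes.  All names and values are invented for illustration; this is not Pentland's implementation.

    class DataUnion:
        """Holds members' records and answers only pre-approved queries,
        returning the minimal answer rather than the raw data."""

        APPROVED_QUERIES = {"is_adult"}  # queries members have consented to

        def __init__(self, records: dict):
            self._records = records

        def query(self, name: str, member_id: str) -> bool:
            if name not in self.APPROVED_QUERIES:
                raise PermissionError(f"query {name!r} is not authorized")
            if name == "is_adult":
                # Only the yes/no answer leaves the union, never the age
                # or the hundreds of other attributes held about the member.
                return self._records[member_id]["age"] >= 18

    union = DataUnion({"mike": {"age": 52, "city": "Washington, D.C."}})
    print(union.query("is_adult", "mike"))  # True, and nothing else is revealed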

     So what I would like you to do is if you can, get on to the Q&A.  Start putting out ideas.  Put out ideas on better terms.  How would we replace the idea of a digital ecosystem?  That is a great idea.

     Either in the chat or in the Q&A, put your suggestions out there.  And let's have a discussion by the chat and by the Q&A.  And at the same time, I would also ask my fellow panelists if they have any better ways of thinking about it? 

     When I was at Microsoft, we spent a lot of time thinking about ways to convey the idea of the cloud and the Internet of Things that weren't so techy and that connected to people in a powerful way.  Sorry I talked so much.  Clearly, we have a lot of mind sets to fix.

     So let's get some ideas out here.  I just read a few that are here already.  Digital self-determination instead of sovereignty.  There was a great session on that earlier in the week.  Jorge mentions digital ecosystem instead of cyberspace. 

     The one challenge I would pose to people is Nelson's law of buzzwords.  You only get five syllables.  So digital ecosystem is seven.

     Still great.  Maybe it is just e-cosystem, I don't know.  So let's think creatively and concisely.  You get five syllables for the buzzwords and you get eight words to explain it.  That's really the way Washington works, and unfortunately more and more countries seem to think in Tweets and sound bites and not in policy analysis.

     We have to find a way to make the issues new.  Any panelists want to weigh in?  Do you have any good words, any other good ways of saying cyberspace, data sovereignty, ethical AI?  My data?  Any other terms that you propose?

     >> RICARDO CHAVARRIAGA: Before launching into a new term, I just want to say that it seems we always try to make a parallel with existing things. 

     When we think of cyberspace, we map it onto a physical space.  When we think of data as the new oil, we imagine it as tokens, things that can be sold or exchanged.  But data is something different. 

     So one of the things I think we need to imagine is how to redefine the public sphere for the digital age.  Take the example of the picture that is taken of me: there are certain aspects of the information that are personal, but they are not mine. 

     Also, if I think about my data, we have to imagine how this data ends up belonging to someone else when it is transformed, when it is processed by an algorithm.

     So I think before thinking about new spaces, we have to redefine what this new public space for the digital age is.  That is what will set the boundaries of what is my data, over which I need to have agency, determination, and sovereignty.  On the other side of the boundary, other rules would apply.

     So how do you coin a term for that?  That's a tricky one.  But I guess if we continue talking about cyberspace, which often gets conflated with cybersecurity, we are going in the wrong direction. 

     I would focus on how we define the public sphere; from there we can identify how systems can be respectful of my human rights, my privacy and the things over which I should have autonomy and agency.

     >> MICHAEL NELSON: The other dangerous and crazy phrase is data is the new plutonium, which is mostly used by privacy advocates.  It makes one very powerful statement: data used the wrong way can cause a big mess, and if you get too much of it in one place, it can make a really big mess. 

     But, again, it implies that we can keep track of every grain of data and that we need a nuclear arms control treaty to control where it goes.  It is interesting to read these things. 

     I worked with Senator Gore back in the late 1980s and early 1990s.  He was very interested in metaphors and was famous for talking about the information superhighway back when nobody knew what the internet was.  And that was very effective because he made it clear that we have this fundamental infrastructure that is going to connect our people. 

     But some people took it one step further and said, well, have the federal government pay for the whole superhighway; let's have the government pay for all these networks that are going to reach everybody.

     And we had to come back and say no, no, the government paid for just the backbone, just the core of the highway, and everybody else had to pay for the roads and driveways and parking lots.

     Every time you come up with a short way of saying something, you have unintended consequences.  But right now we're in a world of mass confusion because people are taking all the old analogies and just applying them. 

     Other thoughts from our panelists on how to talk differently?  How do we get that killer phrase, like digital divide or net neutrality?  What is the phrase that is going to get people to rethink the data landscape?

     >> EMANUELA GIRARDI: An interesting concept, I think, is the one that has been promoted and developed by Luciano Floridi.  He created this concept of the infosphere. 

     He basically started from the biosphere and created the infosphere, a world that is digital but at the same time real, and that includes cyberspace, the internet and digital communication, but also the classical mass media.

     So I think that thinking about this kind of new world, virtual but at the same time real, like the infosphere, will help us create a new mind set.  This is an interesting concept to me.

     >> MICHAEL NELSON: And what was the name again?  Maybe you can post it in the chat.

     >> EMANUELA GIRARDI:  Luciano Floridi.  He's a philosopher, and he's the Director of the AI and Data Analytics Laboratory at Oxford University.

     >> MICHAEL NELSON: Okay, you might post his name in the chat.

     >> EMANUELA GIRARDI: Sure, definitely.

     >> MICHAEL NELSON: Other thoughts or things that people are -- I really like some of the things that Jorge is saying here about the digital ecosystem or data ecosystem.

     >> VERONICA ARROYO: On my point: I believe it is quite interesting what you propose, to analyze how we frame the concepts and the phrases that we use in every aspect, right? 

     With some of them I completely agree.  For example, moving from ethics or AI ethics to digital human rights: those are more concrete words that we can use, and enforceable words as well. 

     There are others that, you know, at least for me, myself, as an advocate and activist, we obviously sometimes use those big words, as you mentioned, like data is the new plutonium or data is the new oil, just to catch attention.

     That's important, but it's not the only thing we should focus on.  If you catch people's attention using catchy phrases or ideas or analogies, sometimes you miss the opportunity to explain what you are talking about.  And we are living in a world where we need to ask advocates, and people who are very involved in this, to be as clear as possible about the things that we are doing and not leave space for misconceptions.

     That is why, at least from our perspective, it is good to have all of the concepts clear and all of the information clear.  I think the exercise is interesting to make us think and reopen our minds about what words we are using today.

     But I would also put more attention on how we explain this.  For example, data sovereignty is a big word, and it is quite interesting right now to pay attention to what governments are doing with data sovereignty, why they are using this word, and what it means exactly for them and for us as citizens.

     I think we can listen and see what others think about this.  I'm quite open to continue the discussion.

     >> MICHAEL NELSON: I have worked with governments around the world, and so often the policy maker is dealing with 50 different issues.  People are coming in to talk about the corn crop, the weather, who they need to hire for a new position, and digital sovereignty.

     And they are desperate to have some analogy.  And that is why we have got to pick the right one.  I'm particularly concerned that people are so focused on the Internet of Things that they are not focusing on the data and where it goes. 

     I always talk about the cloud of things, which is also only three syllables, because it gets people to think of the whole system and how we can design the cloud in a way that provides better security.  If someone says Internet of Things security, or Internet of Things privacy, the first impulse is to secure the things and to regulate hundreds of billions of things.

     Well, let's regulate the data.  Let's understand where the data from the thing is going and not try to fix the problem by making sure every single five-cent device has 17 different privacy functions.

     Cathal, I will put you on the spot.  Microsoft has been such a leader in this space, and you have such a large branding and marketing team.

     What is the phrase that we are missing here?  And I know you are a lawyer and not a communications person, but you know how important words are because you're a lawyer.

     >> CATHAL MCDERMOTT: Yes.  And thanks, Michael.  It's a great discussion, really, figuring out how to land messaging. 

     And I think that is the real piece at the heart of privacy right now: ensuring that the messaging delivered to the customer, and to the folks whose data is being used, is clear and intelligible. 

     I'm not sure whether that is best served through buzzwords or short phrases.  As you well represented, these lose nuance, but there is always a tendency to focus on something which is fantastic for a pickup in media or to get to the front of the news cycle.

     But I think that always comes with a loss of certain meaning.  And that meaning is the space where the message is really communicated to the user.  Privacy especially needs to be communicated in a manner which is clear and meaningful to the audience, right?  There is not a one size fits all.

     If we look, for example, at a constituency around children's privacy, that requires quite a different understanding and communication than would apply to, let's say, a seasoned technical expert.  And I think companies are really understanding these differences now, and the shift: the same carbon copy of messaging doesn't work or land in the same way.

     And I think just understanding that is the piece which is really going to make the most sense here.  Whether that is done best with a short, terse statement which can also grab some of the news cycle, I'm not quite sure.

     We are better off looking at whether it's understandable, with a lack of legalese.  So I have a slightly different take here: simplicity is key, and intelligibility.

     >> MICHAEL NELSON: So what do you do when someone posts 'we must have data sovereignty'?  It probably will be in French or German. 

     But what would you do?  What would you tell your CEO to respond if for some reason he wanted to respond to that tweet?  Or if Brad Smith were to respond, what would be the phrasing and the terms?

     Because I mean sovereignty has always been part of running countries, right?  How can you be against sovereignty?  But how would Microsoft respond to a tweet like that?

     >> CATHAL MCDERMOTT: I think when we look at the sovereignty issue, that is something which is clearly becoming more important across the globe and occupying more news space.  And we have and do see those types of messages coming out from countries more and more at the moment.

     I think that, once again, I'm not quite sure how we would respond with a tweet, or even if a tweet would be the best way to respond in this kind of debate.

     I think there are a number of ways to try to influence, move and educate around these debates.  And that comes from a number of functions within any company's toolbox here to engage.  We would, of course, be engaging on a policy level, but also via our educational streams, to help educate our users and Microsoft's audience.

     And it is multifaceted.  These are difficult discussions, and you know --

     >> MICHAEL NELSON: Sorry to put you on the spot.  That was unplanned.  This has not been rehearsed. 

     But Ricardo and Emanuela and Veronica, Wim, how would you respond if the head of a country says we must have data sovereignty and we must have it now?

     >> RICARDO CHAVARRIAGA: If I backtrack a little bit: what we are trying to do here is come up with a sound bite that opens the door to the discussion where it really matters. 

     Because we started from a set of different misnomers, like cybersecurity and AI ethics and data governance, that all address specific issues.  And this is where the clarity that Veronica was mentioning is needed.

     But then we also need this sound bite that says: this is what all of these issues entail.

     So if we really want to get to the sound bite, we need to agree on what it is we want to achieve, or the scenario we want to avoid.  If we want to push, for example, for sovereignty at the individual, organizational or state level, we can imagine digital autonomy.  If we think about flourishing, we can imagine talking about digital empowerment or digital enlightenment.

     If we want to talk about protection, then we may talk about digital governance or digital regulation.  But then, what is the thing we want to avoid, and what are the misconceptions we want to clarify, in terms of data as a commodity or the digital public sphere?  What is this first point that we need to redefine and get societies and organizations to agree upon?

     Based on that, we can say that all of these aspects point to, say, a humane approach to digital technologies.  Because data is not the only thing; we may have other things that generate data but have issues of their own.  Then we may say this is not about data but about the digital: digital communities, digital ecosystems or digital spaces. 

     Or we may say we really want to avoid unforeseen events, and then we put all the money into regulation and oversight.  But then, what is the thing that we want to avoid?  That would be my provocative question.  The challenge is that when we ask these questions, we end up with thousands of them.

     So what is the thread that ties all of these issues together?  If we identify that, we can start with an explanation that takes a couple of pages, but that will be the best way to later narrow it down to the five syllables that allow us to talk with the Twitter-oriented leaders.

     >> MICHAEL NELSON: What you just said is incredibly important.  You have to have the hub, the focus, the seed that is going to be the one idea that everything else grows from.

     It may be what you just said: digital autonomy or, you know, data agency or personal sovereignty.  The idea that I have the ability to control my data.  It is a little bit like hate speech and application content: governments that want to filter the internet and censor the internet don't want to give their people the flexibility to make their own choices and to pick their own filters.  And that personal filtering function is what a lot of platforms try to provide so that we don't have this pressure on governments to censor the internet. 

     But again, it's how do we control data?  Well, give users as much control as possible over data about them.  Not my data, but data that might be about me.  I call it two-way transparency: I will give away data, but you're going to tell me what you are doing with it.  Or mutually assured disclosure, another buzzword, but that sounds a little too much like nuclear war and plutonium. 
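
     One way to picture this two-way transparency is an append-only, hash-chained access ledger that a person could consult to see who used their data and for what.  A hypothetical sketch follows; this is not a mechanism mandated by any regulation, and all names are invented for illustration.

    import hashlib
    import json
    import time

    class AccessLedger:
        """Append-only log of who accessed which data item and why.
        Each entry is chained to the previous one, so tampering is detectable."""

        def __init__(self):
            self.entries = []
            self._prev_hash = "0" * 64

        def record(self, accessor: str, data_item: str, purpose: str) -> dict:
            entry = {
                "accessor": accessor,
                "data_item": data_item,
                "purpose": purpose,
                "timestamp": time.time(),
                "prev_hash": self._prev_hash,
            }
            self._prev_hash = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()
            ).hexdigest()
            entry["hash"] = self._prev_hash
            self.entries.append(entry)
            return entry

    ledger = AccessLedger()
    ledger.record("ad-service-x", "location-history", "analytics")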

     Titti or anybody else want to weigh in on this?  I know we are running out of time for coming up with the idea.

     >> CONCETTINA CASSA:  Evelyne's comment, written in the chat, is that she does not agree about comparing AI ethics and digital human rights.  Maybe you can say a few words on this.  I don't know.

     >> MICHAEL NELSON: Can we promote Evelyne? 

     I mean I do agree with her quite strongly that AI ethics is this term that means everything. 

     I have been in three-day conferences where we've talked about everything and at the end people were more confused. 

     The moment I remember was 25 years ago at a UNESCO meeting.  I stood up and I said, we've talked for three days, has anybody defined info ethics?  And the chairman said, well, you just weren't listening.  And then a theologian stood up and said no, Dr. Nelson is right, we have not determined whether it is Kantian ethics or Hegelian ethics or Confucian ethics.  And then we had a 25-minute conversation about that.

     >> CONCETTINA CASSA: Let's see.  Maybe, Evelyne, you can just join the discussion, if you like.

     >> MICHAEL NELSON: And Jorge has said a number of very important things.

     >> CONCETTINA CASSA: And he can intervene and share his comments, please.

     >> MICHAEL NELSON:  Unfortunately, in the webinar format we have to promote people to panelists. 

     So we have 20 minutes left.  Perhaps it's time to go to the last section and talk about how this report can do more to impact this debate.  You know, what is missing from the report?  Oh, Emanuela?

     >> EMANUELA GIRARDI: Sorry, I just wanted to add something before your conclusion. 

     I think, building on what Ricardo said, to me one of the most important things is to agree on an ontology. 

     So really to agree on what we mean when we say things like data has to benefit and not to harm people.  These basic things have to be globally agreed.  Then from that we can probably start building a global vision that includes the governance of data.

     But the basic thing is really that everybody agrees on this kind of data ontology.  I think this is one of the key requirements.

     >> MICHAEL NELSON: Well, one of the nice things about this project is that it has brought people like Microsoft into the discussion. 

     We had some great comments earlier in the week from Fujitsu, which has its own five principles for responsible use of data and artificial intelligence.  There is a lot of overlap, a lot of best practices to choose from.  I think highlighting some of those in our final report would be very helpful.

     So if anybody has other pointers, other sets of principles we should look at, please share them.  We all know about the OECD principles, and the European Union has come forward with some very interesting ideas on what needs to be done, in the area of artificial intelligence in particular.  Let's keep sharing.

     We do have an opportunity now to spend the last 20 minutes talking about what is next and how to get more ideas and more buzz words and more tweets to shape the debate in the right way.

     Thank you very much, and thanks for trying this experiment.  It didn't quite work the way I had hoped because not as many people upvoted.

     >> CONCETTINA CASSA:  Evelyne has been promoted, and then we will try to reply to your question in the chat.

     >> MICHAEL NELSON: Can we promote Jorge as well?

     >> CONCETTINA CASSA:  Evelyne, I think she can intervene now.

     >> EVELYNE TAUCHNITZ: Can you hear me now?  Hello, everybody. 

     My comment was about ethics and human rights: I cannot really agree on that, because I'm working both in human rights and in social ethics.  Let me maybe specify that and also highlight the importance of ethics and of values. 

     Ethics really is the study or science of different moral norms.  That presumes that there are lots of moral norms out there, some of them legally binding, for example human rights, but lots of them not. 

     I think the concern of ethics is not only to judge what is right or wrong behavior, but to compare these different norms.  There might be situations where it is ethically correct to violate human rights, or you could argue that way.  And I think it is very important, when we talk about data and new technologies and the internet in general, to also try to find agreement on what values we want to base these technologies on.

     In the sense of: what purposes should they serve?  And once we can find an agreement on that, we should set the boundaries of the technologies accordingly. 

     For what purposes are data supposed to be collected, for instance?  For what purposes are they not?  Did users agree to them?  Did they not agree to them?

     And then the last governance question, which we didn't talk about so much: in which forum can you discuss these questions?  Where can you find such agreement?  And what political legitimacy is there, in a sense?

     And which actors should really be responsible for talking about these issues.

     >> MICHAEL NELSON: I'm glad you could bring this in.  Because I didn't want people to think I don't care about ethics.  But I'm just being practical in saying we have a base to build on when it comes to human rights. 

     You go to different countries and -- you know, we have done comparisons.  You go around the world, you ask people ethical questions and ask them to make values-based choices.  And it is amazing how different the answers are.

     When I worked at IBM, they did a survey every year of the new employees.  And they had data for more than 300,000 -- actually half a million -- previous and current employees of IBM.

     And the differences are striking between countries where people put society and the community first and countries like the U.S. where we are crazy about individualism.  I mean, the U.S. is very unusual that way.

     So it is much harder to get this agreement on ethics.  I think values is a better term, because when you talk to technologists and tell them we're going to talk about ethics, they look at you blankly and think, well, I didn't take a philosophy class in college, you know.

     But if you talk about values, that makes them more at ease and more comfortable.  And if you say human rights, they at least know how to look that up and find out what it means.

     >> EVELYNE TAUCHNITZ: I think like maybe a common concept is also norms.  Because there we can talk about moral norms, legal norms, social norms.  And that's really like something people can relate to as well.

     And I would specify that we should talk about human rights norms.  There we have the advantage -- and I completely agree -- that they are already internationally agreed upon.  That is something you can build on if you want to find some kind of agreement.

     >> MICHAEL NELSON: Let's turn over to Jorge.  And then Titti, who can take us to the end.  Jorge, you've had some great comments in the chat, and I really wanted to bring you into the conversation.

     >> JORGE CANCIO: Thank you so much, Michael.

     >> MICHAEL NELSON: Introduce yourself because not everybody watched your earlier session.

     >> JORGE CANCIO: This is Jorge Cancio.  Thank you for putting me on the spot.  You see I am wearing my business attire, so I'm completely ready for this intervention.

     I will pick up on what I said about the digital ecosystem.  I think that we have been focused on geographic, geometric analogies and metaphors for the last 20 years or more.  And these have many limitations and many implications which remain.

     They also direct our imagination to something that can be governed, something that can be taken, something that is passive and can be dominated by us humans, I think.

     And if you take this idea of a digital ecosystem, it is an interactive and interdependent space, to take the terminology of the panel on digital cooperation, and it combines natural and social relations between individuals and communities.

     And this also gives space to the idea we are exploring of digital self-determination, for the individual but also for communities.  Ecosystem also brings with it the idea of systemic, dynamic relationships.  Ecosystems, plural -- there is no one size fits all.

     There are differences between ecosystems.  And especially in this present and future, where digital is pervasive and where we are really becoming hybrid beings between our digital life and our normal or physical life, it is something you cannot really distinguish any more.  And this will evolve and increase in the next years.

     So this notion of ecosystem is much richer, and it also links up to something we have been seeing during this IGF: the mega trend of digitization is absolutely and intimately connected to the environment and to climate change.  And so I think it's a great idea.

     >> MICHAEL NELSON: Thanks for being part of the discussion. 

     One final thought.  I'm going to continue to monitor the Q&A, and I hope there will be other ideas and people will upvote the greatest ones.

     So many of the terms we use are so negative.  And sometimes that is by design.  Take artificial intelligence: we don't want artificial food.  Big tech: that brings up ideas of big government and Big Brother.

     Again, there are so many terms that have been designed, in some cases, to cause a negative reaction.  And sometimes, because of the movies and science fiction, terms that were positive got turned into something like killer robots and Skynet.

     So we have to think up new terms, partly to get rid of the old confusion and the old connotations.  Anyway, thanks for a great discussion.

     We are going to incorporate a lot of these terms.  And if we can find a way to say digital ecosystem in five syllables, I think we have a winner.  Titti, do you want to wrap it up?

     >> CONCETTINA CASSA: This question refers to the discussion of data and new technologies in an internet governance context.  What should be the next steps?

     We will try to share how we can address these issues, where and by whom they should be discussed, and what the role of the BPF is inside the context of the IGF itself.

     Because, as you know, the UN Secretary-General issued the Roadmap for Digital Cooperation, which identifies priority areas for digital cooperation.

     Also, at the same time, we have the paper of recommendations that aims to identify concrete ideas on how to move forward.  And then we also have the MAG Working Group on IGF Strengthening and Strategy, which gives useful suggestions on the way forward in this context.

     Then the question is where and how the BPF should fit in this new context.  As you know, there are several ideas on this: some share the idea that the BPF could be used as a policy incubator, to incubate policy and law for public discussion.

     And some say it could have the role of a cooperation accelerator, trying to focus on specific issues or to bring a broad group of institutions and processes together on these issues.

     I just want to share this with you and ask for your ideas: what do you think?  What should the role of the BPF be in the new scenario, the IGF Plus?  So please share your ideas in the chat or take the floor, and let's see what we think about this.

     I don't know if Anriette can say something on this.  Wim, there are several ideas.  What do you think?

     Should the BPF be used as -- I think both functions are important, I mean.  It is important to share and understand how the BPF should lead the discussion in the future, in the next scenario of the IGF Plus.

     >> EVELYNE TAUCHNITZ: Can I say something about that?

     What I like about the BPF, I mean, is that it's about best practices in a sense.  Like best ideas and best ways of doing things.

     And I think there is a huge innovative and creative potential which should be used for policy making. 

     And actually, a personal feeling I have within the IGF -- I mean, it is the Internet Governance Forum -- is that the governance aspect is not as intensively discussed as it could be, in a sense.

     I think it would be good to really focus on creative or innovative ways of governance, in the sense of the policy incubator you just mentioned.  This is my spontaneous reaction: I think it would be really great if it could develop this way.

     >> CONCETTINA CASSA: Thanks a lot, Evelyne.

     >> RICARDO CHAVARRIAGA: I'm not very familiar with these activities; this is my first intervention.

     But one thing I'm somewhat afraid of is that there are so many discussions and so many initiatives appearing at the same time that we may have the problem of too many cooks, or of not knowing exactly where to pay attention.

     And I think it would be important to see how the interaction with other initiatives developing governance mechanisms, standards and recommendations can be cross-fertilized through interaction with this forum.

     And I liked a lot the previous comment about having a policy incubator for new forms of governance, because this is becoming a recurrent theme when discussing emerging technologies in general: we see that the current approaches seem unfit, but we don't really know yet what we need to replace them with.

     So I think having a policy incubator that has good interaction and active dialogue with the organizations developing standards and with governance bodies would be a great, great way to move forward the discussions that you are holding here.

     >> CONCETTINA CASSA: Thanks a lot, Ricardo.  Any other suggestions or comments on this?  Please share your ideas.

     >> ANRIETTE ESTERHUYSEN: Thanks so much for having this session.  I think what is valuable about a session like this is that it actually really brings together diverse ideas and perspectives on the problem. 

     And in a way it feels to me -- and I know some of you that are working in the BPF also feel that as well -- that this shouldn't be the end of the BPF cycle.  In a way, it feels to me as if we are just beginning in some ways to come up with really creative solutions.

     So I think we should look at how we do BPFs in the IGF and how much time we give them.  How many opportunities we give people to come together in this kind of format.  And I was just talking to Wim, maybe we need an intersessional event where the BPF or all of the intersessional work of the BPF comes together to have this kind of session. 

     I think we need to continue and give the topic time, and I think we shouldn't be overambitious about finding so-called best practices.  I think that in a way becomes quite intimidating.  It's useful, but maybe it can also limit the collaboration that emerges from the BPF process.

     But I just invite all of you to come to the session later today.  We'll be looking at the future of BPFs, because this is the moment when we need to look at how we can strengthen and evolve this modality.  Thanks, Titti.

     >> CONCETTINA CASSA: Thanks a lot, Anriette.  Any other comments or suggestions or ideas?

     >> MICHAEL NELSON: This is Michael.  If I can just add to what Anriette said.

     Best practices are great, but when I was in government and business, I found that worse practices and worst practices, particularly when you understood why they didn't work, were often more useful.  But nobody wants to talk about failure, so I hope we'll do more about that.

     >> EVELYNE TAUCHNITZ: I think best practices don't necessarily have to be real.  We can also analyze the worst cases and then think about how we should have done better.

     I don't think we always need these practical case studies in order to talk about best practices.  In a sense, we can also discuss what could be done better without the real cases.  That is how policies are usually designed and drafted: just thinking about how we should do things and how we can do them better for the future.

     >> CONCETTINA CASSA: Thanks a lot.  Anything else?  Veronica, did you want to add something?

     >> VERONICA ARROYO: I think what is important -- and I agree with some of the people writing in the chat -- is that this should not be the end of the work that the BPF does.

     As people who want to continue working on this, it is important for us.  It's also important to include cases from other parts of the world.  I agree with Evelyne that we can start working on cases, analyze them from a common standpoint, and from that create our best practices.  Coming here and hearing from different perspectives is really important.

     So the more diversity and the more work we can do, the better, I think.  There are a lot of people really interested in this, and I think that is really great.

     >> CONCETTINA CASSA: Okay.  Thanks a lot, Veronica.  We have a few minutes left.  I will give the floor to Wim so he can add a few words.

     Wim?  Because we are out of time, I don't think we can accept other comments.  Is that correct, Wim?

     >> WIM DEGEZELLE: We are already one minute over time.  I would like to say two things. 

     Coming back to the discussion I was having with Anriette offline, I had the same feeling as I had last year at the IGF: that this is a great starting point for a BPF discussion.

     And sometimes it feels final -- I feel a bit sorry, because this could have been a wonderful discussion to start developing the dialogue throughout the year.  So we should be looking into a way to continue and build on the discussion we had today.

     And the session Anriette also pointed to, to discuss the future of IGFs and follow up on this topic, is later today at 1:30 UTC.  You can see it in the schedule.  The title is Preparing IGFs for the Next Cycle and Following Cycle.

     So that is 1:30 UTC.  I would like to suggest that this discussion be continued there.

     And then for this year's BPF, I will just post the links in the chat.  The idea is that the BPF document itself will be finalized shortly after the IGF.

     So we would welcome additional comments and feedback on this discussion and the report by e-mail, or please don't hesitate to subscribe to the mailing list and continue the discussion we had today over there.

     So then I would like to thank you all for participating in the session, and I hope to see many of you later on at the discussion at 1:30.  And thank you to the panelists and the facilitators.

     >> CONCETTINA CASSA: Okay.  Thanks a lot.  Bye-bye.