IGF 2019 – Day 0 – Raum III – Pre-Event #13 Open and Free and What - Visions for the Future of the Internet

The following are the outputs of the real-time captioning taken during the Fourteenth Annual Meeting of the Internet Governance Forum (IGF) in Berlin, Germany, from 25 to 29 November 2019. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 



>> MODERATOR: Ladies and gentlemen, welcome to our panel discussion on the Future of the Internet. I'm head of the department Global Order, which organized this panel.

First of all, let me say that we are especially honored to welcome the United Nations Under-Secretary-General Fabrizio Hochschild. Thank you for being with us.

Eduardo Magrani, the moderator of this event, will introduce the other panelists in just a moment. Before we start, let me remind you how much has changed regarding the perception of the Internet in less than, yeah, 10 years.

In December 2010, the so‑called Arab Spring started. A lot of people claimed that the Internet and social media played a crucial role in making it happen. Also, during that time, when people talked about China and the Internet, they jokingly asked, Well, what are the Chinese going to do? Put a million people in school gymnasiums and let them control the Internet? Well, as it turned out, China put even more people to the task and, as of right now, is using the Internet pretty successfully, I might add, to observe and control more than 1 billion people. So this is one of many reasons why it's crucial for this initiative to talk about the future of the Internet, an Internet that is not only enabling economic growth but also strengthening open societies.

That is why it's important for us to bring together actors and stakeholders worldwide who advocate for a free and open Internet. That is why we're pleased to provide an opportunity for this panel of distinguished experts to discuss the future of the Internet.

I wish you all a stimulating discussion and ask you, Eduardo, to take it from here.

Thank you very much.

( Applause )

>> EDUARDO MAGRANI: Thank you very much. Thank you, everyone. Good afternoon. Thank you for coming to our panel. I have a dream team here with me on this panel. I'm Eduardo Magrani, fellow at the (?) Foundation, professor of digital rights and intellectual property in Brazil.

Let me introduce very briefly the speakers we have.

Mr. Fabrizio Hochschild, representing the international U.N. level, Under-Secretary-General and Special Adviser to the Secretary-General.

Ms. Miranda Sissons, representing the private sector. She's currently the Director of Human Rights Policy and Engagement at Facebook.

(?), representing academia, Senior Fellow at the Centre of Excellence for National Security at RSIS.

Mr. Vincent Bagiire, representing government, Permanent Secretary, Ministry of ICT and National Guidance.

Mr. Carlos Affonso, representing civil society. He's currently a director of ITS Rio and a member of the (?) Network of Centers.

And Mr. Carl Buhr, representing the EU level ‑‑ the EU region ‑‑ from the European Commission, Deputy Head of Cabinet of Commissioner (?) Gabriel.

So thank you very much.

The motivation for composing this panel is to discuss the future of the Internet. So it's now or never. This panel will be divided into three rounds. In the first round, I would like to address the guiding principles, the guiding values, for each stakeholder, since we have many of them represented on this panel. We might have a shared vision for the future of the Internet, but if you think that we do not yet have an open and free Internet, imagine the "what" from now on.

I would like to ask each stakeholder: what is the "what" for each one of them? So we chart the guiding values. In this first round, I will have the same question for all of them. This round will have 20 minutes, only three minutes per speaker, if you're fine with that. So very quick replies.

In the second round, also with 20 minutes, I will address specific questions, trying to gather the specific challenges of each stakeholder and, mainly, the challenges they are facing to fulfill the vision they stated in the first round. So also a self-evaluation, a self-reflection on what they are doing towards the fulfillment of this vision.

In the third round, we'll have 30 minutes of reactions from the panelists, the speakers, and also from you, the audience. So if you want to ask any questions, feel free to note them down. I will come over with a microphone at the end of our session. We'll have half an hour for that. With that said, I would like to pass the mic to Mr. Fabrizio Hochschild to start this discussion among different stakeholders.

>> FABRIZIO HOCHSCHILD: Thank you. I think the official U.N. position is that values and norms that have been agreed on for the analog world apply equally in the digital world. So human rights, for example, have to be upheld as much in the digital realm as in the analog world.

I think a second would be safety. And, you know, the right to privacy is challenged in the digital era and is likely to become more challenged with 5G technologies.

A third would be, I would say, do no harm. That goes beyond human rights. Harm does happen, unintentionally and sometimes intentionally, through the Internet. We've unleashed forces that I don't think we fully understand quite how to get back in the box again. A fourth would be an Internet that is both universal and truly global, and one that is sophisticated enough to be sensitive to regional cultures and different traditions. Again, I don't think we've got our heads around that challenge of upholding universality alongside sensitivity to different regional or community approaches.

And, finally, I would add upholding the Internet as truly a public good, something that can indeed help commerce, help economic growth, but ultimately is a global public good.

>> So the question is about what value I would ascribe to the Internet in the future. When I thought about this, I was thinking maybe broadly about, you know, aspirational values. That's basically because I have a background as a media educator. I remember when I started using the Internet, and it was quite a while ago, about 20 years, for most of us in the room. It is a medium of communication. Right? It is in and of itself agnostic. It's a platform, like this stage is a platform. You can't describe a platform as good or evil. It's actually the actors. Some are very good. Some could be better, and some really shouldn't act anymore. We all play our roles, and the actions of these actors cannot be confused with the Internet in and of itself.

Maybe I'm indulging in semantics, but I had to make that clear for myself when we were talking about the Internet. We have to ascribe values in line with the functionality. Are we talking about the systemic values? So my "what" would be secure. I would like it to be secure. I suppose it's in line with the work I do as well. Is it secure in terms of infrastructure? Are we able to trust it as a reliable medium? When you think about trust, you think of consistency, durability, dependability. We use this to improve our lives, to become digital. Now, we want to keep our eyes on the prize: the prospects of digitalization and an open and free Internet. What is not lost on me is that we're talking about an open and free Internet, and we're at the Internet Governance Forum. That's a bit ironic for me.

This is something that's going to be an ongoing challenge. So do we want to speak about the actors? I go back to the analogy I use. Is this about the behaviors as well? I don't know.

There are cases where users confuse the platform for the Internet itself, and there are situations like that. Maybe I'm asking more questions than answering them. That's my "what."

>> I'm working for the European Commissioner for the Digital Economy and Society. There, in a nutshell, you have the basics of our approach. You mentioned an open and free Internet. We're an open society. It's natural for us to want the Internet to be open. If I were to add an additional adjective, an important one that describes what we did over the last few years, it would be citizen-centric: the citizen is the subject of the work, not just the players. We know there are large players, be they states or large companies. They have an effect and an impact. That reflects the regulatory work that we do. There's an understanding that the individual is what counts here. You can see that replicated through security policies. I would subscribe to what you just said about security. When it comes to data protection, for example, when it comes to control over your own data online, all of that is very important. Let me just mention a few things we did in this space that reflect the values I just mentioned: on the cybersecurity side, of course, but also when it comes to electronic identity and personal data protection, a strong role for the individual vis-a-vis platforms, and a new platform regulation since last year, focused on the power imbalance between platforms, companies, and smaller users.

Mentioning another hot topic, artificial intelligence: we have an initiative out there which basically brings it all together, AI and decision making, the rights to privacy and data protection, transparency, and democracy. All of that is at the base of the work we're doing. I think we'll have occasion later on to revisit one or the other of these topics.

>> EDUARDO MAGRANI: Mr. Bagiire?

>> VINCENT BAGIIRE: Thank you. I work for the government of Uganda. Vincent Bagiire. I will try to bring in the government perspective, but restricting myself to Uganda and not Africa because I'm not qualified to do that.

For me, when you talk about the Internet, it's how to reconcile the contradictions. Reconciliation of the contradictions from the perspective that what applies to the analog setting should apply to the Internet, which the U.N. official has referred to.

And, indeed, if you look at the space, there's regulation. You regulate the mainstream media, but there are challenges as far as citizen journalism is concerned and, indeed, when you take action ‑‑ because that's where the abuse of the Internet comes in ‑‑ manipulating identities, people impersonating others. And when governments such as the one I work for come in, you're accused of fighting freedom of expression. But this person whose freedom of expression we should be respecting is not actually them. I come from politics. I was a politician for five years in the parliament of Uganda. There were six Facebook pages that didn't belong to me. All that time, I made complaints in the right forums to have these taken down. Nothing was done until after I had left politics.

So when we tried to interrogate, we found out who had set up the pages. Now, that's where I ask the question of how to reconcile freedom of expression and the realities of abuse. The Internet is, indeed, a platform. You can't argue the Internet is bad, because it's a platform, just like a road. If you speed, you will get into an accident. Just like the Internet: if you use it for the right purpose, you get the good, or right, results.

We have scenarios where, indeed, there's a bit of abuse. So I think, if we're talking about the future of the Internet, we need to discuss how we ensure at all times that we get the best out of the platform. Thank you very much.

>> Thank you, Eduardo. It's a pleasure to be here. When we think about what we could add to open and free, I would suggest diversity. Diverse. This is almost a tricky suggestion because we're at an IGF whose banner is One World, One Net, One Vision. The tricky part is how to make sure that inside the idea of One Vision, we insert diversity as a value, as a goal, for the future of the Internet that we're building, making sure that we respect regional diversity, gender diversity, language diversity, making sure this is a value that we all share.

I think this is important to mention here because the IGF has been serving as a vehicle to bring diversity into the discussions of Internet policy and governance. If we think about it, the IGF started back in 2006. In 2006, there was no iPhone. The iPhone was launched in 2007. Facebook was two years old. So it's interesting to see the IGF as this vehicle for creating a forum for dialogue, a robust dialogue among stakeholders, that travels through all the moments where our relationship with the Internet has been changing, from 2006 on.

Those would be my first remarks: for us to take a look at diversity and allow different voices in the future of the Internet.

>> EDUARDO MAGRANI: Excellent.

Ms. Sissons.

>> MIRANDA SISSONS: Thank you very much.

( Speaking in non‑English language )

>> MIRANDA SISSONS: What else would I like to see? An Internet that's grounded in the world's strongest, fairest, and most global decision-making framework, one that can encounter and deal with diversity. For me, that is the global human rights framework. Now, why would I bother saying something so obvious, especially when Fabrizio has already mentioned it? Amid the noise that surrounds the many debates and legitimate concerns today, I'm not hearing or seeing many other players say that. I'm not just talking about freedom of expression. I'm not just talking about U.S. devotion to First Amendment principles. I'm talking about a time-tested, country-tested, legal and normative framework that's been with us and been signed and agreed to by most countries around the world for the last 60 years. It's there for a reason. It's there because it is a way that we can debate and answer and make decisions and do trade-offs about the roles of different actors, about how we accommodate sensitivity to culture and language with the need to be citizen-centered and with the roles and duties of governments.

Let's be clear that our free and open Internet is, indeed, potentially splintering. Civic space right now is narrowing, not increasing. Six in 10 countries are repressing the rights to assembly, association, and freedom of expression that would allow their citizens to say what kind of Internet, what kind of life, what kind of societies they would like to construct.

So I don't mean human rights as a rhetorical framework, the kind of annoying self-righteous claim that people like to make. I mean the tests involved in the jurisprudence, in national legislation, and in the guidance of the Human Rights Committee on Article 19 of the International Covenant on Civil and Political Rights, where there are limits to freedom of expression over things like public health, public order, and the rights and dignities of others. To frame those around four main principles: Is the restriction lawful, based in the law of the country? Is it legitimate, meaning, does it have a lawful purpose? Is that law accessible, clear, and precise? Is the restriction necessary? By necessary, I mean to uphold the rights of users or citizens, in this case, in a democracy. And is it proportionate? That is, is it the narrowest restriction required in order to achieve its goal?

Because none of the debates we're seeing now are completely new. Their speed and complexity may be terrifying, and I fully confess to agreeing with that, but this is part of a great global dance that many people have been engaged in, of citizen expression, of political freedom, of security and accountability, and of regulation, for at least the last 60 years.

And so we're, perhaps, in a new phase of that dance, but its requirements, while complicated, I think are quite clear.

>> EDUARDO MAGRANI: Thank you very much for your inputs on this. So this is a very good overview of the principles for each stakeholder. It seems to me that the future of the Internet is going to be a very challenging one. Yes. Because, although you represent different regions, different cultures, and also different positions among the different stakeholders, we have some overlaps on some principles and values but also different priorities. It's going to be very hard to cope with different priorities, right? So many interesting things came up: how to cope with all of that.

So open, free, and what? And this is the "what" for each one of you. What I would like to address now is a kind of self-reflection, a self-evaluation of each stakeholder. What are the steps you're taking so far to fulfill your vision? What are the strategies and also the aftermath? What is going right and wrong in your strategy so far to fulfill this vision?

So, starting at the international U.N. level, a huge effort is being made for international cooperation, international digital cooperation. What is going wrong and what is going right in this digital development, in the digital sphere? The Internet has gone through a huge transformation. Carlos likes to say this a lot. In less than 20 years, we came from a romantic and enthusiastic point of view of the Internet to a more dystopian one, more or less. How are things working out? We're having a more polarized Internet, where the radicalization of discourses happens online.

So starting on the international level, what are the steps being taken?

>> I think we're trying to catch up. As Miranda indicated, the Internet is a technology that's spread faster than any other technology in human history. It's taken just 25 years to reach 3.6, 3.7 billion people. That's never happened before. And that has happened largely thanks to the initiative and the skill of the private sector. Very few technologies have developed so far away from governments. Governments have been caught unaware and have been trying to play catch-up. Just five years ago, Cohen and Schmidt talked about the Internet as one of the greatest experiments in ungoverned spaces, one of the greatest experiments in human history in anarchy. They also said it was the site of incredible scams, the site of great abuse. It was also the site, for example, of the spread of violent extremism.

This, I think, brings me to a key point. The opposite of freedom is not governance. The opposite of freedom is tyranny. Where you don't have the rule of law, where you don't have good governance, you have tyranny and abuse. I think that's a little of what we've seen. Of course, freedom requires regulation ‑‑ and in Europe, that's been known at least since the French Revolution. I think we've seen a bit of that on the Internet, what an ungoverned space can look like, in a negative form.

I think the reluctance to accept that is what has led very much to the fragmentation of the Internet and a tendency to push back against its universal nature. So I think regulation and governance have to happen at the regional level. Europe has led in that regard. But if you want to maintain this tool as an international global tool, really as a global public good, it also needs to happen at the universal level, at the international level. And that's what we're trying to get discussions around at the U.N. The High-level Panel on Digital Cooperation, which was truly multinational and multistakeholder ‑‑ it had human rights activists, academics, government members ‑‑ came to a consensus on the way forward. I think that was a first stab at that.

I think the need for that is broadly recognized. The great difficulty we have now is that in the current international environment, which is characterized by extreme rivalry, extreme distrust, and privileging competition over cooperation, it's very difficult to make headway. There, I think, the voices of those who are really advocates of keeping up an open and global Internet need to speak up louder about the need for greater universal norm-setting, precisely to uphold that.

>> EDUARDO MAGRANI: Thank you very much.

So we heard a bit about what the U.N. at the international level is doing to reduce fragmentation and polarization, and it's a huge effort to cope with different priorities. Right?

So what I would like to address now, to the representative of academia, is: what is the role of academia, and how can academics achieve impact with their outputs? How can academics really reach policy impact to try to enhance this international digital cooperation? I know you're very much focused on cybersecurity, but you can also go broader than that.

>> Thank you, Eduardo. So my "what," just to remind everybody, was security. Security is kind of in line with what I do. My work revolves around the cybersecurity strategy of small states. I'm from Singapore, by the way. Within that strategy, I'm also looking at the resilience aspect. I'm not only looking at the systemic resilience of the hardware and critical infrastructure, but at the resilience of the users. I'm talking about the psychological resilience of the end users, people like you and I.

I mean, if you think about it, we're all nodes, endpoints in this digitally connected web, which means we are, firstly, a vulnerability in and of ourselves, but we're also the first line of defense. So with increasing connectedness, with more smart devices, our threat surface is just so much bigger than it was, you know, in the past. And this is the thing people miss, because we're so used to using smart devices. We talk about the Internet of Things. My own country wants to be a smart nation. That's the goal we've set for ourselves.

Then we find ourselves thinking about, okay, how, in my work ‑‑ part of my work is informing the policy advisory that the Singapore government will have to think about, will have to create. And with a lot of the work that I do and that my colleagues do, we realize that in some way it's going to inform a very practical approach to help people who are dealing with this. The challenge is to not get lost in the technical detail, not to get lost in looking out for it all. We have to warn about the harms and threats out there, but also to provide: these are the things you can think about; these are the solutions you can throw at this. Increasingly, we're seeing that this is not something that the military of a nation can do. It's something that is beyond the government, and it really has to be. Because we're so connected, everyone has to play a part. At one point, we were talking about a whole-of-government approach. Then, no. No. It has to be whole of nation.

Getting people to be involved in this whole-of-nation enterprise is something that most countries are thinking about. How do we engage with people? The mindset is: this is an issue of defense; it's an issue of security; therefore, the government should take charge of it. It's overcoming that mindset.

I've sat in a few sessions today. I've heard people say we need to do education. A lot of "we wish, we should, we must, you know, break down the silos." Recently, three weeks ago, I was in Estonia. I was talking to people involved with the cyber defense unit. These are regular people, techies, lawyers, who, when there's a cybersecurity incident, rally to the cause. They're like the first responders, in a sense.

So civil society, I think, plugs the gap where you have academics. When they work with society, it's quite powerful because it brings things down to the nuts and bolts, to the grassroots level, where it's about making this accessible to your grandma and grandpa. We talk about protecting the young all the time. Let me just remind you all that the young look at the world very differently from all of us. They consume and they absorb information completely differently. We're very linear. We can bounce along, but the young take from everywhere in a random, distributed fashion. I'm worried about protecting grandma and grandpa and my aunt and uncle. They're the ones getting scammed too.

You have to remember the young will take care of themselves, and I believe that because they are digital natives. We aren't. Most of us in the room ‑‑ not to insult anyone, but I think most of us, you know, grew up analog and converted to digital. So that's my two cents' worth.

>> EDUARDO MAGRANI: Thank you very much. So you believe academia can play a role in shaping an agenda for the private sector, at the international level, in governance, and so on. It's interesting how you brought up communication skills as part of this, right?

So to Mr. Carl Buhr. We had the hate speech law in Germany. We also had the copyright directive. I imagine how you look at these regulations influencing other regions, and whether you're also taking into consideration inputs coming from different regions to build these frameworks.

>> CARL BUHR: I don't know how to cram everything into three minutes, like the other speakers. Across these very large countries, we all have the same challenges. That's very clear. As you've described it beautifully, it's about individuals. I think that goes well with what we said earlier. We're asking ourselves the same questions. We have the same challenges: how to not just go about talking to the countries but also to the people in the end, and the technologies that will help these people. Many of these things are not about a difference in values and what we want to achieve, but about how to achieve it.

There are large discussions in Europe about what the law should say in the law book in the end, and what will be the impact of that. The discussion is at that level, and not about what we want to achieve. Many times, people can agree on that. That's my thought on that.

I wanted to use this opportunity to commend the work that the U.N. has been doing on digital cooperation, which we support, to get to the next level. Over the IGF's history, the Internet has become more and more important for all of us, and for the economy. Some of the challenges that exist have already been mentioned. For that reason, we think it's very important that the multistakeholder approach is preserved and also developed further, because you cannot have a complete disconnect between discussions where everybody is at the table and decisions which are taken in a much smaller group. I also wanted to link that to cybersecurity as well.

One of the big challenges in Europe is that you have the governments who think it's security or defense, or "us over here in the interior ministry." Then you have other people, like the industry people, who want to boost the economy. Then you have the research people, who sit on a large amount of money, in many instances, for public research.

They often have to agree on goals: where should the money be spent, what are the difficulties? We have legislation in the pipeline where it's difficult, because in the various European member states this discussion is still ongoing.

Now, to your question more specifically: of course, we're aware that in the international system ‑‑ also the trade system, by the way, and also the Internet ‑‑ regulation in one place, especially if it's a rather large space, has an influence on what is happening elsewhere and what is being sought elsewhere. You have businesses that want to standardize their procedures. Whether they run two or three different shops, or 50 around the world, that's very different. They're looking for consistency. That's why many rules have changed.

I was there when we drafted the GDPR. Some of those words in there are by me, I'm afraid to say. That was 2011. It kicked in last year. It took seven years from the first proposal to actually becoming law. And the actual implementation by companies and businesses and individuals will take time. It's a huge endeavor. It shows the stakes. It shows you have to get it right. It goes back to what I said earlier: it depends on the individual rules in the end. What are the behavioral requirements you put on companies and organizations, for example, what would be their impact, and how does that link back to the freedoms and rights and values that you wanted to defend?

If you make rules that end up with people mindlessly agreeing to data processing because they're asked 20 times a day, you can wonder if that's really achieving the goal. Those are the kinds of discussions we're going to have a lot.

Going forward, we want to export a democratic, open-society vision of the world. We're not ashamed of that. That's why, starting from human rights and the Fundamental Rights Charter of the European Union, we put these rules on the books, and we're very happy if other parts of the world like them and try to apply them as well. Data protection is just a good example. There, it's linked to the economic requirements, the economic opportunities of companies in third countries who have chosen to align their framework with the European one to make trade relations more seamless. We think that's easier. It's easier for citizen goals to be achieved, citizen privacy. I think I've doubled my time allotment, so I'm stopping here. Thank you.

>> EDUARDO MAGRANI: Thank you very much.

So, Mr. Bagiire, I would like to hear your perspective. Since we're hearing a lot about the international efforts and also some strong legal frameworks coming from the European perspective, do you think the future of the Internet is coming from the North? And how do you feel about the representation of the South, the African region?

>> VINCENT BAGIIRE: We all know the origin of the Internet. I think what is important is the processes around it, of governance. I must confess, yes, we do contribute, as Africa, but not as effectively as we should. I think perhaps we are to blame for that ourselves, more for the fact that we have not put enough resources into the processes. If you look at the participants in this IGF and try to see how many people are from government, I think civil society outweighs the number from government from the continent. That's a reality. I say that because, from my own country, I'm the only one from government, but I've seen like six people randomly from my country.

What is true is that when we return, we must make every effort to influence the agenda. It's not as easy as it should be. I think it's important for governments to pay attention to the processes so they can participate and engage with the issues as they are.

It is true that around the World Summit on the Information Society, there was a report written by the organization regarding African countries that do not participate in things like this one. I think that's an important point. Whereas we contribute to the infrastructure, the process is as important as the development of the infrastructure. I can say that in some instances, certain aspects have been domesticated. In our case, we have a Data Protection and Privacy Act. It took us seven years as well to get that done. I don't know if that's the magic number. We have it, nonetheless. It was instituted in February of this year. Basically, it mirrors what is happening in Europe. I think that's a step in the right direction.

But overall, I must emphasize that the processes, as they happen here, are extremely critical, and it's important that we in Africa pay as much attention and participate aggressively.

>> EDUARDO MAGRANI: Thank you very much. Carlos Affonso, as a representative and a voice in global efforts, how do you see the role of civil society? Carlos was one of the builders of the Internet framework in Brazil. He's an important voice. How do you perceive the role of civil society in the Latin American context?

>> CARLOS AFFONSO: When you say builder, I am not an engineer. I'm a lawyer. You might not be used to having lawyers known like that. But, super quickly, when we look at the situation in Latin America, I think it's interesting for us to step back and look at what happened at the last edition of the IGF, when the president of France, Macron, made a statement. He made the case for Europe to step up with its own values in order to provide almost a third way, or third alternative, as inspiration for future Internet regulation.

Looking from other regions, especially Latin America, we cannot think only of those as the three viable alternatives. Especially in a moment when Latin America is going through very troubling times, with the political landscape changing a lot, I think this is the moment in which we need to be super cautious about how we plug into the debates on Internet regulation, especially to avoid the usage of the Internet to produce something we can only call digital populism. This is something we need to be cautious about.

Speaking of which, when we talk about the future of the Internet and its regulation, I think it's always good to go back to what was said at the end of the '90s in Lawrence Lessig's beautiful article on the "Law of the Horse," where he argues that good policy on the Internet needs to take into account the legal point of view, the economic forces, the social norms, and the architecture of the technical aspects. I really think this message was important back in the late '90s, and it's still, I would say, even more important today. What we see nowadays is a certain disappointment that people might feel with the direction the Internet is heading. It might lead people to think that approving a piece of legislation will magically solve all the problems and steer the Internet back to the good ol' days. The law itself is not going to solve this problem. We need to look at the economic impacts. We need to see how society reacts to one specific change in technology.

We need to count on technology itself to help steer us back in a good direction. Since we have mentioned artificial intelligence very briefly, I know this panel is about the future of the Internet, but I think it is useful for us to keep artificial intelligence in the back of our minds, because what we see right now is that some governments are trying to go ahead and legislate on issues in artificial intelligence that are simply not there yet. It looks almost like the governments are saying, We missed the train on the Internet, so the Internet happened without us, but now that we're aware enough of this tricky issue of technology, we will not let AI run amok. So I think it's interesting for us to pay attention, in order to see what the balance of that regulation would be. Are we going too fast, regulating things we're not even aware of yet?

Those are my remarks on the second round.

>> EDUARDO MAGRANI: Excellent. Thinking about the good ol' days, Ms. Miranda Sissons, how do you perceive these companies in terms of self‑reflection and self‑evaluation? Do you think companies are doing a good job in upholding human rights and fulfilling these visions and principles? What steps and strategies are being taken?

>> MIRANDA SISSONS: I'm sorry I only have three minutes. In one sense, no, obviously not. It would not be credible to say yes in a room like this, with the public debates going on right now. But my position as Director of Human Rights Policy and Engagement at Facebook, Inc., which I started just four months ago, was a direct acknowledgment that Facebook, in this instance, in its human rights assessment of Myanmar and what happened in Myanmar, needed to do more.

If I talk about the existing human rights framework, it's a crucial element to be adopted in these discussions of universal and domestic frameworks for thinking about the future of the Internet.

Tech companies have some strong obligations ‑‑ well, some tech companies have accepted some obligations, and that is through the Global Network Initiative. How many people here have heard of the Global Network Initiative? It's a floor, not a ceiling, where the big ones, like Google and Facebook, have accepted responsibility to protect against government scoop‑ups of data and government suppression of freedom of expression using their platforms. That works pretty well. There are a lot of companies involved but a lot that are not. It is a floor, and there's a demand for more than the floor. The concern over the human rights impacts of all social media platforms and tech companies has advanced well beyond that Article 19 floor. And there are the due diligence frameworks, but they're more articulated for mining companies than they are for Internet and social media platforms, or other companies where you might have 500 products in the pipeline.

You might not formally enter or exit any particular space. So what does due diligence look like there? Obviously, it could look a lot more like what has been done, but how do you make that decision making count? That's what I'm trying to do, lean in. Make sure that happens in critical situations before we go in. It's not enough, though. I don't want to ever pretend it could be enough.

That's why, I think, going back to Fabrizio's discussion following up on the report, we need the universal values and the universal approach, with the involvement of all the different actors, but specifically recognizing that this is an ecosystem where we have ecosystem problems. We have Internet problems, platform problems, social media problems, government problems, and we do need that ecosystem approach.

So I'm not here to say that the platforms are perfect, but I want to draw attention to the fact that roughly 40% of my time since I came on has been spent specifically on keeping freedom of expression open in an Asian country that's in the bottom five in the world for freedom of expression. That's related to the 75 million people using the platform there.

So those fights are not over. Although we're not talking about them in the debate, that should not keep us from talking about the other challenges that have been articulated. Those challenges still exist. That's why I think the human rights process, if handled well, is a very good way to bring those broader problems and challenges into the game. Again, we're not abandoning it. We can't have a libertarian Internet, and we probably don't want that libertarian version of the Internet, but we want an open and free Internet.

>> EDUARDO MAGRANI: Thank you for all of your inputs. We have time now for some questions and also some reactions, if you want to. So you must already feel warmed up right now. So the microphone will circulate a bit. Maybe we can hear two questions.


>> AUDIENCE MEMBER: Hi. I'm from Sri Lanka. Governance is critical. We need good frameworks, blah, blah, blah, but there are two problems at play, which are implicit in what you said. The first is that it's difficult to link meatspace identity to cyberspace identity. That's partly a missing set of legal policies, but there's a whole gap there that's rather difficult to tackle.

And the second is that most of this very classical type of governance we're talking about is built on the legal monopoly of violence: If you do not agree to this international agreement that we've designed, we're going to lock you away and throw away the key. How do you establish this when you can't establish identity or inflict physical violence on the agents using this for evil? Are there serious efforts where you're working with technical actors, with legal experts, with the practicalities? Otherwise, all of this just boils down to nice pieces of paper. Right? It just ends at wonderful discussions of libertarian ideas, and our lives are not our own. From womb to tomb, it revolves around others. How do you resolve the practicality of it?

( Overlapping speakers )

>> Everyone raised these points at different times.

>> EDUARDO MAGRANI: Let's get one more.

>> AUDIENCE MEMBER: Thank you, Alejandro from the University of Mexico. I would just hang on to one statement that was made by Mr. Fabrizio Hochschild, which is the words "ungoverned spaces," the Internet as an ungoverned space. There's no human being or agent that's not under some national authority. To hear representatives of government speak of ungoverned spaces means that people are ungoverned, firms are ungoverned, or maybe even governments are ungoverned. What we're seeing is scaling across jurisdictions and identity management. You have friction loss. You have barrier lowering. You have memory effects. But in the end, almost everything we see on the Internet is not original from the human conduct point of view.

>> Maybe let's take those two, and then we make another round.

>> Not so much would I respond, but let me talk around it, because I see some analogies in the debate about AI ethics and the values of artificial intelligence and what we expect from it. It's very difficult to talk about the Internet without talking about artificial intelligence. I know there is a lot of conversation internationally about principles of AI use, principles of AI deployment. There are something like 80 sets of principles that have been subscribed to. Maybe it is because we watch too many Hollywood films and we have too much fear of what AI will do to us ‑‑ run us out of jobs, take over our lives ‑‑ but we've already let the Internet take us over. Why are we still having this conversation about values when we should be better at this by now? That was just my gut reaction. Carlos, I agree with you. I'm a lawyer too. I admit to being a lawyer. We don't have the responses ‑‑ we don't have the solutions for everything. Please, that's not the idea.

We talk about having a conversation. We talk about breaking down the silos, multistakeholderism, et cetera, but we have not addressed it at a specific level. There's no going back. There's no undoing the Internet. Sorry. We are on this track. The horses bolted. The horses have run. We have to talk the horse into doing the things we want it to do.

There's going to be an issue with finding common values. Let me be very clear: We all come from different cultures, different heritages, different political situations. We can find some common ground, but at the end of the day, it's going to be a battle over values and over how we ascribe the principles by which we say which values are important. Sometimes I feel that in this conversation about values and principles ‑‑ you know, is this a utilitarian approach? We have to look at the questions we're asking and at who we are going to be loyal to.

You said something about doing no harm. That's the Hippocratic oath. I'm totally on board with that. Who am I going to mess up? Who am I going to hurt in doing something like this? That's just my comment.

>> I wanted to agree with the gentleman from Mexico very much. That's exactly why it's so decisive to discuss what the rules will be and how they are going to operate. I hear this often, that people say, Oh, this is a lawless space. And it's said by people who have a special interest in rules coming in or not coming in. We call them lobbyists in Brussels. We have many of those. It's a side show. What's clear is that it is a governed space. How is it governed? What is driving those regulations?

Are we trying to kill the AI before it comes up? You just mentioned it. I don't think we're there, in terms of defending against the scary robots. I have a bigger concern: that we try to regulate something we don't understand because it's the early days, and then we cut off developments. That's a big discussion we have right now. It's true that we're talking about being citizen centric, and citizens have concerns.

I'm not a lawyer, by the way. I am an engineer. I understand that part of these concerns may be invalid, but they have an effect on the complete situation, and they drive how people vote, and they drive what people find important. Therefore, it needs to be discussed. In Brussels, the last two years were spent working on this ethics topic. Now that we've had the nice conversation, people say, Let's make this a law now, the ethics of AI. But the discussion only starts at that point in time. It doesn't end. That's where we are right now. I think the next three or four months will show how this discussion goes forward. I'm quite confident we'll have a very sensible thing coming out of this.

I also wanted to address the statement from the gentleman from Sri Lanka.

There are rules from the analog world, and there's room to say that, as a starting point, there are rules to follow online. In Europe, we've created a framework for electronic identity. These need to go together: a framework to work with the identities of people online, without allowing every platform to say, My business is so essential, so linked to the physical world, that I need to know who everybody is.

There are many companies that don't need to know that. What we're discussing is going the next step and asking, Does it always have to be black and white? Do I have to be completely anonymous, where the other side knows nothing about me? There are certain things that have age restrictions. I think Facebook is one of them. There's a certain age you should have to be to use them. But why would you have to tell that platform everything about yourself, including your address, and maybe send a picture of your face? It may be enough to prove that you have that age. This is one example, but you can see it in the use cases.

What we're doing is trying to build a framework for that to happen, again with governments, services, and companies who want to use it. I also want to be clear that there may be some services where it might be obligatory. Sometimes it's clear the state needs to know who it is operating with.

A final word on your statement about the monopoly of violence. We have this problem all the time: knowing who did it, in the end. Often it is the attribution problem. The issue is to close the door so the next guy is not walking in there. I don't want to deny that there's a difference between an online scam and physical violence. I'm just saying that there is a challenge, of course, but I don't believe that you will close it off by making a law that says everybody needs to be identified online, because the people who want to do something bad will probably just ignore it.

I think we live in this trade‑off.

By the way, a final word: This is an interesting trade‑off in the discussion with the law enforcement interest. From the law enforcement side, it's always better to have more data because you may need it at some point in time. But from a citizen's, open‑society point of view, you need to start from the other end.

You just said it. I don't think we'll solve this problem, but we'll continue to have these two perspectives clash, hopefully in a fruitful way that leads to compromises that allow us to go forward and also to show and prove to our citizens that they are safe. I also agree with you that we won't be able to undo this. Nobody wants to. The Internet is one of the greatest things humanity has done, in my personal, humble opinion. We should do what we can to protect it.

>> Just on this "ungoverned spaces," I was actually quoting Eric Schmidt and Jared Cohen. Of course, in some anthropological and philosophical sense, it is not an ungoverned space. There I would agree with you. But there's a different level. That's not what I was alluding to. I suspect that everyone in this room, over the past 20 years, has gotten ‑‑ if not one, then dozens or hundreds of ‑‑ emails from somebody whose father just died, and he was the governor of such and such bank, and they have $10 million, and if you would only give your bank details, you will get a 10% cut. I'm sure nobody in this room ever responded, but those emails go to a million people, and maybe one person, probably somebody my age, so not digitally versed, replies. Has anyone heard of such a scammer being prosecuted? I haven't. That's what we mean by lack of governance.

Remember how difficult it was for Osama bin Laden to go out and reach hundreds of thousands of people and try to recruit them, how many legal hurdles he would have to evade and overcome.

With the Internet, very easy.

That's what I mean by ungoverned spaces.

Think about hate speech, expressing the most obnoxious views ever and trying to do so on a mega scale. In the past, you would have to use radio or print media. How many newspapers with any reach would accept you reaching a large public? How many radio stations with any reach would accept you reaching the public? It would be moderated, regulated. On the Internet, it's not. That's what I mean by ungoverned spaces.

In terms of data, I was with a major AI company in Japan, and I asked, What regulations do you follow when you handle data privacy?

They said, Well, in China, we follow the national law. In Japan, we follow the national law. In other countries, there's no national law. That's what I mean by ungoverned spaces.

I think the problem comes from policy makers not having kept up. And this goes to the point that was raised by the other speaker. Engineers move fast and break things. Engineers find solutions. They work through trial and error. They may get things 500 times wrong, and then they get it right, and they learn. Policy makers are paralyzed by difficulty, and they don't want to do anything until they have the perfect solution. The technology has advanced at huge speed, and the policy makers are desperately trying to catch up and have not done a very good job. That's where I think we need to do better ‑‑ not against the horse but together with the horse. We have a lot of catching up to do.

>> EDUARDO MAGRANI: Let me get two questions here, and you can react, please. Those two, please.

>> AUDIENCE MEMBER: Yeah. Thank you very much. My name is (indiscernible). I'm representing here the government of Latvia. It's a very interesting conversation, especially now, at this distance from 2005, when we were talking mostly about access to the Internet and a little bit about oversight and the governance of Internet resources. At the time, we felt this was extremely complex and complicated. Actually, now we see, as we move the discussion on, that it's becoming much more complex than it used to be, and it will not get simpler in the future. So we're doomed to address those complex issues and try to understand them.

Here comes the issue of legislation. I think engineers can afford mistakes. Legislators most likely cannot. It is better to let them understand the complexities associated with an issue, in consultations with industry representatives and civil society representatives, and then make the legislative proposal, rather than rush into uninformed decision making, which is most likely to the detriment of resolving digital issues.

Let me leave you with one last thought. Now we're talking about governance of digital and digital as a separate topic. I would bet that maybe five or seven years from now, we will stop talking about digital as separate; rather, we'll be talking about everything being digital by default.

So, as a result, most likely the IGF will change completely in its nature and will be a cross‑cutting discussion about everything we're doing rather than specifically focusing on digital as a separate topic.

Thank you.

>> EDUARDO MAGRANI: Thank you very much. So, here. I would just ask you to be very brief with the questions, please, so that the other panelists are also able to reply, here and over there. Then you can react.

Another two questions.

>> AUDIENCE MEMBER: Good afternoon. Thank you. My name is Julia. I have a question related to data protection compliance, which was one of the "whats" mentioned in the beginning.

Recently, the U.N. released a report about digital interdependence. At one point, the report states that we need to guarantee data protection, but to do so, we need to review companies' business models. Companies' business models do not allow for compliance with basic obligations such as purpose limitation, or they lead to the choice of an inadequate legal basis for certain types of activities.

From where I see it, theory is still very far from reality. With that in mind, Mr. Hochschild, regarding what is done by Facebook and Twitter and so on, what are you doing to bridge this gap in terms of accountability and regulation?

And Ms. Miranda Sissons, regarding the usage of the platform to spread political ads that do not contain accurate information and that are used to target individuals who maybe never gave their consent for such a purpose, what are you doing, as a company, to bridge this gap and comply with the GDPR? Is Facebook going to position itself against these practices, like Google and Twitter just did? Or is it going to abstain from taking action?

Thank you very much.

>> MIRANDA SISSONS: Oh, now I get it. Okay. So I'm given the question. What I want to say first is that I do think it's very complex, obviously, but I also think that in this difficult conversation, despite the complexity, we can try to adhere to some important principles, such as being citizen centered, such as ensuring basic human rights norms that are agreed and put into regulatory frameworks, such as that you shouldn't give data to countries that have not signed the (indiscernible) and the convention against torture.

We need to distinguish a bit between Internet platforms and AI ‑‑ AI, used by some of the platforms, is a broader question than social media. I think it's uncontroversial that we could look at the need to ban AI in killer robots. These are all fundamental things, basic floors, that I should state here.

Certainly, Facebook, as far as I understand it, complies with the GDPR. I'm not going to tell you this is all fantastic, but I think it's a pretty important moment to also look at those ad‑targeting practices.

The data processing at Facebook, which has been extensive for its business model, is compliant in many ways and a sign of what privacy should be. I don't mean to sugar coat that. I do mean to be deeply involved in that conversation.

On political advertising, Facebook has global policies, which is easy and convenient but also very important, right, so we can hold the line on many good policies around the world ‑‑ for example, policies on when we will and will not give your data to law enforcement organizations.

It's, personally for me, a very, very important question and a very difficult one. The position that the company has taken has been very unpopular, and they have said they will be looking at this position, but I can't tell you now that it's necessarily going to change. What I can say is there's an active culture of debate inside the company that reflects that outside the company.

There's weak regulation of these ads. In the U.S., media are required to take candidates' ads without fact checking, given the U.S. framework and the legal vacuum regarding political advertisement.

>> CARLOS AFFONSO: Just going back to the comment about the future of the Internet and how, seven years from now, we're not going to be discussing this anymore. Since this is the panel to look towards the future of the Internet, I think we can stress something that's repeated all the time but is important to mention. Maybe the future of the Internet is for the Internet to be invisible ‑‑ the idea that we're not going to speak about offline, online, the Internet, or entering the Internet. It will not even be an expression that's understood by future generations.

When we get there, there may be traps we need to avoid in this so‑called disappearance of the Internet ‑‑ namely, failing to pay attention to the features of the Internet that are its essence and that led us to the point where we are right now: the fact that the Internet is a network of networks; the fact that the Internet is decentralized, and how important that has been to its whole development; the idea that the Internet is open in a way that allows permissionless innovation. Those key features of the Internet, I think, will be even more important when we look at the future of the Internet, and especially when we want to make this comparison with other technologies such as AI, like we're doing right now. AI is quite different in many respects in terms of regulation.

For instance, the capacity of AI to produce physical harm is bigger than what you may find on the Internet in general terms. And when you talk about AI, you have the debate on access. Access to the Internet is something that is, of course, still very important; it has been predominant in the last decade. When it comes to AI, we are talking about people impacted by AI deployments even before we have started addressing issues of access.

Today, the difference between those who will be able to have access to those advancements and those who won't will make the way you work look like the stone age in comparison to the others.

We were talking about quality of life, longevity of people, well‑being. I think this is very important.

And just to conclude, addressing the gentleman from Sri Lanka's comments: if we take the issue of disinformation, if we talk about approving a law to fight disinformation, that might be one step. How will you do it? How will you frame it? If we take a broader perspective, we'll see that law can provide inputs, of course, but we also need to focus on the economic aspects. We need to understand the economic chain and value of producing fake news ‑‑ who is profiting from it, where the money comes from, what the economics of producing false news are. We have to take a look at the social aspects, which is literacy: how people understand what is false and what is not, what they should share and what they should not share. There's a social component that's important.

And technology itself can be an important tool in avoiding some of the negative impacts of this scenario. Just to give you an example, our institution in Brazil, ITS, has developed a tool we call Bagobotch (phonetic), which could be translated as (?) bot catcher ‑‑ a meter of sorts into which you can insert a Twitter handle and see how likely it is that the account is automated. There are a bunch of other tools that end up doing the same thing. I think going through these four areas when thinking about Internet regulation might be a good way to move forward with practical results.

>> EDUARDO MAGRANI: Thank you. Do you want to react very quickly?

>> Just two points. On data measures, the practices between companies differ enormously. That's my understanding. I think certain regional regulations that in one way or another have a global reach ‑‑ because other countries are imitating them or other countries want to do business with that region ‑‑ like the GDPR, which some say impacts 120 countries, are transforming that.

I'm sure this is not universal, but businesses are looking for more universal normative approaches because, obviously, it makes for much easier business if they have one set of legislation or one approach to apply globally, rather than having to adapt their approach country by country. I think we have to continue to globalize best practices, but that will take a concerted effort of governments and of the private sector to make happen.

One word on this pacing challenge and the difference between technology charging ahead and legislators dragging their feet. I think that is the case. The sad thing is legislators do act quickly, but usually after catastrophes and not before them.

The Security Council in 2001 adopted binding resolutions on the use of Internet against violent extremism after the London attacks. The Council of Europe adopted their first ever additional protocol in 2015 after the Paris terrorist attacks. There was action at the national level and international level after the Christchurch attacks. It would be nice if some of that speed could happen before rather than after the tragedy.

In the digital era, technology changes the world at an unprecedented speed. Keeping to the same old‑fashioned speed of legislation just won't be good enough. Those of us working in the public policy field need to grow more flexible, more nimble, and not just let technology move ahead into the next century while we're still stuck in the past.

>> EDUARDO MAGRANI: Excellent. Thank you very, very much. Unfortunately, our time is over, but you can definitely reach the panelists in the corridor.

So this is how the future of the Internet is going to be: with some overlaps concerning the guiding principles and values but definitely different priorities. Since we talked a lot about AI, maybe the robots can decide on our behalf. Thank you to our speakers, and thank you all for your contributions. Thank you very much.

( Applause )