IGF 2020 - Day 4 - NRI Technical aspects of content regulation

The following are the outputs of the real-time captioning taken during the virtual Fifteenth Annual Meeting of the Internet Governance Forum (IGF), from 2 to 17 November 2020. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 




>> PETER KOCH: Okay. So, with apologies for the delay, I think we can start this session.  My name is Peter Koch, I'm with the steering committee of the German IGF, and I'm a member of the technical community in IGF terms.

I would like to welcome the attendees and panelists for this session.  Before we start, I would like to give a short statement.  As many of you have heard earlier today, we are missing Marilyn Cade, who passed away yesterday.  Marilyn was an intensely engaged, passionate supporter of the Forum.  In particular, she was a member of many NRIs and a supporter of many of us across the globe.  We would like to dedicate this session to her memory.  Thank you.

So, I would now like to introduce the first panel.  We have contributions from four NRIs: from France, from Brazil, from the United States, and from North Macedonia.

I would like to give the floor to Lucien Castex, who will report from the France IGF.

We will have roughly three minutes per speaker.  This first session will run until roughly 1845 UTC, and we'll have a discussion afterwards before we go to the second panel.  Lucien, the floor is yours.

>> LUCIEN CASTEX: Thank you, Peter, for giving me the floor.  Well, three minutes is quite challenging, indeed.  I'll try to sum up a bit what is going on in France and shed some light, obviously, on the European Union framework.  The first point I would like to speak of is the reform going on, which is the Digital Services Act.  The current framework has been essentially unchanged since the e‑commerce directive, which was adopted in 2000.  In the context of a fast-changing digital world, the reform is underway, and there is a lot of debate in France and obviously in the European Union about the Digital Services Act.

Well, basically, the idea, and it is still a draft, obviously, is to have new rules, for example, for gatekeepers and new entrants, new obligations, and also consumer choice, and to tackle the emergence of new actors, basically online platforms and other big tech.

In France, we have a few initiatives.  First, we have the law aimed at combating false information online.  Passed in 2018, it aims to aid the removal of fake news, particularly during election campaigns.  It creates new transparency obligations for digital platforms, including a duty of cooperation, to basically force online platforms to introduce measures to eliminate fake news; this is the main idea.

The French regulator, the (speaking French), is in charge of checking for compliance.  And there is a new legal injunction aimed at stopping the spread of fake news.  Basically, an interim judge will qualify what is fake news, which is clearly quite difficult, and then, if it is manifest and disseminated deliberately on a massive scale, the judge can order its immediate removal.

So, the second piece of legislation, very quickly, that I wanted to shed light on is the debate on speech and combating online hate speech.

We had a law about hate speech, which was usually called (speaking French), but the French Constitutional Council declared that the law wasn't constitutional.

Most provisions are basically invalid on the grounds of freedom of expression, the Council saying that the restrictions on freedom of speech and communication were not necessary, appropriate, and proportionate to the aim pursued, and so most of the law was struck down.

Last point.  We have a few points on terrorism.  Basically, there is an ongoing reform on that point, but it's at a very early stage.  It's just being debated.  So, that's about it.

>> PETER KOCH: Thanks, Lucien, and indeed, we have a bit more time than I gave you in the first instance so if you have anything to follow up, you can do that now or in the discussion.  With that, I would like to ask Diogo Cortiz to share experiences from Brazil, please.

>> DIOGO CORTIZ: Right, so, good morning, good afternoon, good evening to everyone.  Here in Brazil, content moderation is a big topic now because there have been a lot of events, and elections, where fake news and misinformation content spread across the network.

Okay. And also, some toxic content is being published on the web, especially on social media.  So, we need to try to find some way to deal with those challenges, especially misinformation, and also toxic content, what we usually call hate speech.

Here in Brazil, there is now a debate in Congress; there is a bill called the Fake News Bill to try to deal with this subject, especially regarding messaging services, for example, WhatsApp.  In the initial proposal, the idea was to collect and store all the messages from WhatsApp, so as to have a mechanism to track when and by whom fake news was started, but it's not well accepted by the technical community.  There are a lot of issues with that proposal, and the bill is under discussion in Congress.

Actually, I'm from the technical team, so we are also investigating many different tools, many different technologies to help deal with content moderation.  In particular, I'm leading a project using machine learning and natural language processing to detect some kinds of harmful content on social media.

And here, we are discussing a lot of technical challenges and also ethical challenges, especially for our reality here in Brazil.

Actually, I'm working very closely with people from the UK.  I was a visiting professor at Queen Mary University of London.  Just to give an example, to train a model to recognize, say, hate speech, you need local data, because you have different languages, but you also have different realities, different cultures, and so on.

And just to compare: for example, the teams in the UK that were working with English had a data set of about 100,000 sentences that could be used for training machine learning models.

And here in Brazil, the biggest one that we have nowadays is about 5,000.  So, it's pretty small.  So, we have what you call low resources to train all those models.

So, now we are trying to invest our efforts in creating new data sets together with some partners, but we also have some ethical and legal challenges.  On the ethical side, we need to collect real data from social media, because I cannot train a machine learning model to detect fake news or hate speech on social media using, for example, sentences and texts written by journalists.

Because the way they write is completely different.  It's like canonical versus noncanonical language.  So, we need to collect the data.

So, here we are also facing an ethical challenge about privacy, because on some social media it's okay to collect data using the API, but there is an ethical problem: a person published, for example, a sentence just to express their feelings, their opinions, and now we are collecting it to train a model to do something.  So, it's an ethical challenge that we are also discussing here in Brazil, and I can go further along in my next round.  Thank you.

>> PETER KOCH: Thank you so much, Diogo.  And this brings us to a slightly different theme.  We've heard about platforms and the way to assess and deal with content.

Now, Melinda Clem from the U.S. IGF will talk about potential standards for content regulation that would balance human rights and freedoms online, and she will also dive into the question of how content regulation practices may interfere with internet identifiers.  Melinda, please.

>> MELINDA CLEM: Thank you, Peter, and thank you for your kind opening remarks about Marilyn.  She's very much in all of our hearts, including Nola Cat here, a big friend of Marilyn.

Thank you for the time.  Obviously, this is a hot topic in the United States.  It was something that we covered at IGF USA this summer.  The guiding law in the United States is the Communications Decency Act, Section 230.  You'll hear that nomenclature; you'll see it tweeted from time to time.  What it does is provide what's called intermediary liability protection for specific types of interactive computer service providers, which include platforms all the way down to the infrastructure layer of domain registries and registrars.

It protects the ability to moderate content and to enforce terms of service through what are called AUPs, Acceptable Use Policies, which define what is typically prohibited.

They will be very prescriptive about the types of content that are not allowed.  What we find is that the application of those policies is where a lot of the distinctions and concerns arise: how you balance what's acceptable and what's not.  Anecdotally speaking, I think the easiest way to define a solution here that is going to balance all of our human rights, as well as the specific legal rights we have in the United States, predominantly around free speech, is that the more narrowly you define a problem, the more precisely you define the content that's being targeted.

The easier it is, relatively speaking, and "relatively" is carrying a lot of weight here, to police, moderate, and identify that content.  As Diogo just mentioned, it's difficult to broadly scan and review, but we find it's a lot easier to take a very specific targeted action.

So, for example, across all areas of the internet over the last eight months, we've been looking at how to identify misinformation around the pandemic.

That falls into the bucket of very clearly defined actions that you can take.  You can do things like keyword searches, technology that has been around for over twenty years and has advanced.
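The keyword-search approach mentioned here can be sketched in a few lines; the watchlist and sample posts below are hypothetical, invented purely for illustration and not drawn from any real moderation system.

```python
# Minimal sketch of keyword-based content flagging, as described above.
# WATCHLIST and the sample posts are hypothetical examples.
WATCHLIST = {"miracle cure", "secret remedy", "banned by doctors"}

def flag_post(text: str) -> bool:
    """Return True when the post contains any watched phrase (case-insensitive)."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in WATCHLIST)

posts = [
    "Peer-reviewed study on vaccine efficacy published today",
    "This MIRACLE CURE ends the pandemic overnight!",
]
flagged = [flag_post(p) for p in posts]
print(flagged)  # [False, True]
```

Real systems layer stemming, fuzzy matching, and human review on top of this, but the core idea of a narrowly targeted phrase list is exactly what makes such actions "relatively" easy to police.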

There are ways of looking at whether drugs are being sold by looking at certain aspects of the sales process.  You can find ways that spam is being used to manipulate the DNS to promote these sorts of materials.  That's an example of a very targeted means of balancing moderation with our rights.

Another one we find works quite well is the eradication of CSAM.  There are several technical solutions out there to date, including some by Microsoft and Rackspace, that have been designed and licensed for free to review and find that material and help eradicate it.  In instances where those are not used, we have more manual processes available to be responsive to organizations, trusted intermediaries, or third parties, to help fill the gaps where we don't have technology today and are still in the process of developing it.

That gets us into the broader scale of technical solutions, where you have more proprietary uses of artificial intelligence.  I think the most popular and well known is what Facebook does.  This is how they do a lot of content moderation: producing lists that human beings can then follow up on.  But, again, that is a proprietary solution open only to them today.

I do want to note that several of the companies that we'll be talking about, and that are of concern, we're fortunate enough to have as participants, members, and financial supporters of IGF USA.  That allows us to have some very rich conversations about both the policy and the technology.

This year, we talked a lot more about the policy aspect: first trying to define what needs to be done, before getting to the how.

So, that was the majority of our conversation this year.

>> PETER KOCH: Thanks a lot, Melinda.  And this brings us to the fourth report, from the North Macedonia IGF, jointly provided by Anastas Mishev and Boro Jakimovski.  It's your floor.

>> BORO JAKIMOVSKI: Thank you.  I'm sorry for the camera; I had issues.  So, today we will be talking about the establishment of the national e‑learning platform, which was a short-notice project developed during the last month prior to the start of the school year.  It was a technical challenge put on academia: to deliver a national system for e‑learning for 200,000 students in primary and secondary schools.

This was mainly due to the elections that had been scheduled for the summer, so there were no decision makers available to support an earlier start of system development.  It proved to be a great collaboration, in just one month, to develop, deploy, and organize trainings, with huge technical difficulties in scaling up such a system.

And also to provide as much additional content as possible for the system to start functioning.  I must say, it was a great (audio cut out) at companies.  So, I think that we, as members of the IGF, being involved in such a project, proved that this collaboration is possible.  It can provide very good solutions for the community.

And --

(audio breaking up)

I'm proud of the project that was done on very short notice.  And as I said, the school year started on time.  Anastas?

>> ANASTAS MISHEV: I would just like to add that content regulation is also a big part of the platform we are mentioning here, because of the specificity of the target audience: youngsters, the pupils of primary and secondary schools.

So, based on the previous experience we had, many of those social networks and communication channels were used as media for distributing material that is not adequate for these groups.

So, now we also have to think about the technical solutions that we need to implement in these platforms regarding content regulation.  But that's only part of it.  Content regulation from the technical point of view, at least for us engineers, always seems a little bit easier than the other aspect, which is the policy aspect.

And when you hit the lack of policies, the lack of legislation to support all these things, it's much, much more difficult.  It showed itself at its worst during this pandemic: being flooded with all kinds of information from nonrelevant sources, used to gain political or any other advantage, shows itself in a very bad light.

So, for me, the biggest issue that we also need to discuss is how to align the legal and the technical frameworks.  And where does content regulation stop and freedom of speech begin?  Thank you.

>> PETER KOCH: Thanks a lot, and thanks to the panelists; we are now back perfectly on time.  And we have until the top of the hour to engage in a discussion.

Some questions have already been submitted to the Q&A pod and there's also been a lively discussion and exchange in the chat already.  So, I suggest we summarize some questions and then we'll open it to the panelists to respond to and see how the audience reacts.

So, there's one request to differentiate between content moderation at, say, the platform level, or content layer, versus moderation or attempted content moderation by way of dealing with the identifier system.

And that is what internet governance speak calls names and numbers: domain names and IP addresses.  There's a plea to differentiate those.  And also, since we are reporting here from the perspective of the various NRIs, and some of you have already given examples, it would be interesting to understand exactly how an NRI could be instrumental in either having a regulation in place, or informing regulation by national or regional legislators, or coming up with other models of stakeholder response to potential content regulation.

And the last contribution, I think, was a perfect example of that.  Any of the panelists would like to go first?

>> ANASTAS MISHEV: Well, I think Melinda asked to answer the question so go ahead, Melinda.

>> PETER KOCH: Sure.  Melinda, please.

>> MELINDA CLEM: Okay. Thanks.  I'll start with the identifier system and disclose that I am employed by Afilias, the second largest domain registry, and I am the Chairwoman of the Internet Infrastructure Coalition.  This is something we have spent a lot of time lobbying and educating U.S. regulators about in particular: the extremely blunt tools and instruments that the infrastructure layer has available when it comes to content moderation.

Because we sit anywhere from one to three or four degrees away from the content and the producer of the content, because we're outside of that path of content creation, our only tool is to remove a domain name in its entirety.

So, what that means is, if someone is running a blog service, take the example of blog.com, the only option is to take down all of blog.com, and every blog that's there, to cure the one blog that is the actual problem.

It does put a different perspective on how we can address the problem, and it makes it very clear that when you come up with solutions, you have to consider both what I refer to as the operating model, which gets into the technical model of how you would proceed with action, as well as the business model.  I think the other important aspect that goes unnoticed and needs more clarification is that at the identifier layer, our only monetization is selling those domain names and hosting.

We're not in the business of collecting money on promotion or on prioritizing content in any sort of way, so we're not financially incentivized in any way to manipulate content or get you to it faster.  That's not the way the DNS is built, and we're not suggesting doing that.  So, we need to develop solutions; particularly in the United States, that's part of the concern around the proposed changes to Section 230, which, in general, the sector is stridently opposed to.

What we see as the better solution is, again, targeted approaches to specific problems, dealt with through standards and norms.  Specifically, because we're still in an election cycle here in the United States, we have specific aims, enforcement, and review done by a lot of parties, including the FTC here in the United States, working with the different providers to define what's appropriate and what's not, and what needs to be moderated.  That sort of specificity, we think, is the better approach, where everybody comes to the table and defines what we need to be doing and how we can do it together.

You know, to counter specific issues.

>> PETER KOCH: Thanks, Melinda.  So, Lucien, may I put you on the spot to also elaborate on that distinction between platforms and identifiers?  Also, because you have a national regulator to deal with, but also, European legislator and you might have certain experience with that dual intervention.

>> LUCIEN CASTEX: Thank you, Peter.  Indeed, it's quite a challenge.  I agree with Melinda; it's very difficult to distinguish between harmful and illegal content.  From a legal standpoint, there are a number of different legal areas: hate speech, slander, IP rights, child pornography, and so on.  In France, it's under criminal law and also under civil law.  There are ongoing debates, also, with the DSA and the regulations on terrorism, for example, that try to increase the liability of online platforms.

So, when you look at content, it's basically a gray area.  Some content can be evaluated quite easily, but most of it is quite complex.

So, basically, we would need an appropriate legal evaluation of the content to determine whether it's illegal or not, and when you are, I don't know, a ccTLD, for example, you don't have the power to actually establish the legality of the content.

And as Melinda said a couple of minutes ago, you do not post any content, and no content actually goes through your infrastructure.  So, from a technical standpoint, you have no control over the content of the website, and taking down the website, well, is quite a lot, clearly.

So, actually defining the legality of content is the main problem.

And in France, fake news, that is, disinformation, is quite a good example; the Avia law, as I said, was mostly struck down by the Constitutional Council.  Well, that's it.

>> PETER KOCH: Merci, Lucien.  So, Diogo, may I come back to you and ask you to shed a bit of light on how the multistakeholder discussion in Brazil worked, from a technical perspective but also from other perspectives, discussing the complex system we've touched upon, which spans from identifiers to platforms?  And how did that make your life easier or more difficult when it came to reaching a conclusion?

>> DIOGO CORTIZ: Okay. So, I work especially at the application layer, especially with content.  And my objective is to identify, to train, to research whether it's possible to use machine learning in Brazil to help companies identify some harmful content, especially misinformation and hate speech.

Because, as you may know, we have a lot of platforms in Brazil that are not based in Brazil, so the local teams tend to be reduced.  I'm working right now to investigate how machine learning can help in the moderation process.

So, from the DNS point of view, I'm not the best person to give an answer.  But from the application side, what we are discussing here is whether machine learning tools can help those platforms moderate content, or at least serve as a flashlight over all the content, to in some cases put a flag on a piece of content and say: maybe this content is hate speech, or it's racist.  That then goes to the moderation team.

But for this purpose we have a lot of technical challenges, which I can discuss briefly here.  The first one is that we do not have legal definitions, for example, for hate speech or for misinformation.  So, to train a machine learning model to help with content moderation, we rely on experts; and for those not familiar with machine learning and the process, we need data.

This is what we call supervised learning: you need a data set, and the data set needs to be annotated by humans.  So one challenge that we have is who is annotating those data sets, for example, to identify hate speech.  And also, what is our model reflecting about hate speech content and misinformation?  Those are the two main challenges that we are facing here in Brazil.
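The supervised pipeline described here, humans annotate sentences, a model learns from those labels, can be sketched roughly as follows.  The tiny annotated data set and the crude bag-of-words scoring are invented for illustration; a real system of the kind discussed would need thousands of locally annotated examples and a proper learning algorithm.

```python
from collections import Counter

# Toy illustration of supervised learning for hate speech detection.
# The annotated examples below are hypothetical, not from any real data set.
annotated = [
    ("you people are vermin", "hateful"),
    ("those people deserve nothing", "hateful"),
    ("lovely weather today", "neutral"),
    ("the match was great fun", "neutral"),
]

def train(data):
    """Count word occurrences per human-assigned label (a crude bag-of-words model)."""
    counts = {"hateful": Counter(), "neutral": Counter()}
    for text, label in data:
        counts[label].update(text.split())
    return counts

def predict(counts, text):
    """Score a sentence by word overlap with each label's vocabulary."""
    scores = {label: sum(c[w] for w in text.split()) for label, c in counts.items()}
    return max(scores, key=scores.get)

model = train(annotated)
label = predict(model, "those vermin people")
print(label)  # hateful
```

The sketch also makes the two challenges above concrete: whoever writes the `annotated` list decides what counts as hateful, and the model can only reflect the biases and coverage of those annotations.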

So, for example, at the web technologies center that is hosted by the Brazilian Network Information Center, we worked with Queen Mary University of London.

We trained a model with high accuracy, about ninety percent, for hate speech, and it's completely open source.  Everyone can use it; it's on GitHub now.  So people from different private companies and also researchers are using those systems to find harmful content, especially at the web layer.

So, that's my main work: technical content moderation at the application layer.

>> PETER KOCH: Thanks a lot.  Now, back to the North Macedonia IGF.  A similar question to you: in your discussions, your internet governance environment, your IGF, what challenges do you see when it comes to distinguishing the content layer and the identifier layer, and the complexities and interactions on the technical side?

>> ANASTAS MISHEV: I would like to take a step back.  A very weak point in our internet connectivity was, of course, the lack of an internet exchange point.  Our traffic was going all around the world; for a country of fewer than 2 million people, 100 by 200 kilometers in size, having all the traffic use and load the international links was something that was unfortunately happening until very recently.

And this crisis actually revealed that very weak spot.  Since the organizations that were mandated to do this never did, we as academia, as the University, took it into our own hands and established the first internet exchange point; we talked to operators and the ccTLD.  And the content is local now; the quality of the online lectures is better, which was what was most important from our side.

And then, I believe our next natural step would be to talk to the operators, to talk to the ccTLD, and of course, very importantly, to talk to the policy makers, who are also involved in our IGF.

And through this multistakeholder model, find the best way to apply content regulation, at least at the identifier level, and maybe later, using advanced technologies, even at the content layer per se.

But as far as the identifier layer goes, we are initiating this conversation, these talks with the stakeholders in the country, especially the operators.  Now it's very easy: they are present in our premises, together with the Ministry responsible for policy making and the top-level domain in the country.

To see what the possibilities are, first from the point of view of protecting youngsters from harmful content, and then making it even more general.

>> PETER KOCH: Thank you so much.  So, meanwhile, there has again been quite an interesting exchange; people seem able to multitask, obviously, which is great.  An interesting exchange in the chat, and I'd like to go back to one question, or maybe an observation, that Sebastian Swimmer shared.  Let me find it.  So, Sebastian wrote, and the first part is me: I understand the technical issues with suggesting this when it comes to content layers versus identifier layers technically.

And then Sebastian wrote: yet, content moderation is performed, in brackets: notice-and-takedown mechanisms by trusted DNS notifiers, closing bracket, so it seems the borders are getting blurrier in policy discussions.  Again, that's the borders between the content layer and the identifier layer.  Melinda, can you discuss that from the IGF USA perspective?

>> MELINDA CLEM: Sure, so, a couple of distinctions.  We've got to remember there are different regulatory bodies or oversight for ccTLDs and gTLDs.  It's a lot easier for a country code and its governing body to have very specific rules about how that ccTLD is managed.  By contrast, it is fully outside of the remit of ICANN, which manages the gTLD space, to in any way manage content, or to go even further into other areas where people talk about it having some moderation or very proactive role, because those get into cross‑border issues, matters of state, you know.

Take cryptocurrency, and having registry operators manage that, do takedowns; or cyberterrorism, again, that's another space where a registry headquartered in a specific country cannot adjudicate between matters of state, some sort of cyberwarfare going on, nor should it.

So, I think first and foremost it's very important to keep those lines very clear and understand who has a remit and where the jurisdictional lines are.  The more we clarify, the easier it is for us to start defining solutions and, again, what it is we're going to work with.

One final point: I mentioned some of these specific initiatives and talked about DNS abuse, and this is another area where we need to really keep the focus on technical abuses of the DNS, not on content-related areas.  Phishing, pharming, botnets, fast flux, all the sorts of things we can see, manage, and disrupt at the DNS layer: that's where we should focus, keep those definitions clear and, clearly for gTLDs, inside the remit of ICANN, and not let these lines continue to blur into nontechnical, content-related matters where we can.

>> PETER KOCH: Thanks, Melinda.  Lucien, before we go to the final round, anything to add to this?

>> LUCIEN CASTEX: Well, I quite agree on the frontiers; indeed, content and DNS services are quite different: abusing the DNS on one side and obviously content on the other.  We had a series of workshops in France led by AFNIC, which is the .fr registry, and one of the key points we drew from them was that most people don't see the difference, you know, including, sometimes, experts and legislative authorities.  There is a need to clarify who is doing what: content regulation, legal aspects, DNS abuse, malware, phishing, and so on.

And it is basically about understanding how the internet works and what you can do effectively to actually have an impact with the regulation you want to promote.  It is clearly a problem of education and of understanding how the internet currently works.

>> PETER KOCH: Thanks a lot, Lucien.  So, this brings us almost to the end of this session.  I see there's another question in the pod.  We'll answer that in writing, I believe.

So, time to wrap up.  The agenda suggests we do that in reverse order, but we don't need to stick to the script too formally.  Now that we've had reports from various NRIs, with different approaches in their local or domestic multistakeholder venues, the question for each of you is: what are the next steps?  What are the concluding commitments, as we call them, that are actionable?

So, what is your respective NRI working on next to either improve the understanding or to take your effort to next step?  And let me see.  Yeah, Anastas or Boro, would you like to go first?

>> BORO JAKIMOVSKI: Yeah, so, definitely the collaboration between academia, government, and companies will need to continue.  As members of the IGF, we are currently involved in building the national broadband infrastructure, so we are talking with operators and with the Ministry of Information Society on the establishment of a national broadband office, as well as with the ccTLD managers and the government on all of these aspects.

So, academia is trying to work with ministries and companies on building the infrastructure, and also on covering white zones, which is an issue here, as elsewhere in Europe: making the internet available, with government support to the operators, in places where there is no commercial benefit for the operators, so that it is available at the same price across the entire country.

>> PETER KOCH: Thanks a lot.  So, I think Melinda is next on the list.  IGF USA, what are your next steps or commitments, even?

>> MELINDA CLEM: We don't have firm commitments.  The plan is to continue the dialogue: we want to take advantage of all the work that went into understanding how to put on a good virtual event and try to host some smaller, single-topic events, and this was one topic we wanted to follow through on again.

We only spent seventy minutes and really only talked about the policy aspect of it.  So, a next discussion, specifically around Section 230 and these approaches, is something we've talked about.

>> PETER KOCH: Thank you, and I would like to apologize; I maybe put too strong an emphasis on "commitment."  The idea in these sessions, and I say that for everybody involved or listening, is that the NRIs exchange ideas here and then also prepare some next steps that are actionable.

When we say commitment, nobody is going after the NRIs.  We know that almost all of the work is volunteer effort and can always be overtaken by events, but providing next steps will also encourage interaction between the NRIs, because everybody can look at what everybody else does.

So, take it seriously, but not too seriously.  Nobody is coming after you.  Thanks, Melinda, again.

And next in line is Lucien for the France IGF.

>> LUCIEN CASTEX: Thank you, Peter.  Well, in France we still have a lot going on, because we had the French IGF not long ago, on October 27th, and it was quite a different session, and we have six different workshops to come in December of this year and in 2021, just before IGF France 2021.  So, there is a lot to do, and basically the idea is to continue the dialogue and raise awareness.

>> PETER KOCH: Merci, Lucien.  And then finally, Diogo, back to Brazil.

>> DIOGO CORTIZ: Yes.  So, we have an ongoing discussion here.  We have actually published a report named Internet, Information, and Democracy, produced by members of the Brazilian steering committee together with experts.  We do not have a final commitment, but it's under discussion.

I'm working on one of the initiatives that is using new technologies to deal with these new threats in content moderation.  From my perspective, I'm confident that machine learning can play an important role in helping with content moderation: not to give the final answer or to remove content, but to help the moderation team find harmful content in this overload of data.

>> PETER KOCH: Thanks a lot.  And that leaves me more or less the final minute to summarize and end the session.  So, thanks to the panelists and the contributors.  Thanks also to the people back home in those various IGF structures, and for the attendees who engaged in the discussion by submitting questions or discussing in the chat.

I believe what we've seen is that there is a diverse and vivid community of NRI structures out there, tackling the problems that are discussed at the global IGF level this next week and during the year.  We've heard that people are continuing dialogues by setting up series of meetings; in these pandemic times, national and regional IGF structures are obviously a very good way to perform intersessional work, with the opportunity to break down the global problems into tangible, digestible pieces at the local level and then report back, as we just saw.

Again, I would like to thank all the contributors for their inspiring contributions, and I think that gives all the other IGF structures, and everybody else, something to learn from and a way to continue the conversation across the different structures.

Thanks a lot, and with this, it's my pleasure to have had you here, and I'll close the session.  See you again soon.