IGF 2020 – Day 12 – WS304 Reaffirming human rights in company responses to crisis

The following are the outputs of the real-time captioning taken during the virtual Fifteenth Annual Meeting of the Internet Governance Forum (IGF), from 2 to 17 November 2020. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

 

 

    >> This session is being recorded and hosted under the IGF Code of Conduct and the UN Rules and Regulations. 

     Please be informed that the chat is for social purposes only.  But you can, of course, type your suggestions and remarks in the chat as well.  My colleague has already put all of that information in the chat.

     And if you wish, you can take a snapshot, a print screen of the session and attach it into the final report later on.

     So please feel free to do so.  And I am again admitting people who are still joining.  So maybe let's wait one or two more minutes, but that is just a suggestion.

     >> And host, can they all have their cameras on so we can see them directly?

     >> Yes, I will type a message now.

     >> Okay.

     >> MODERATOR: I would like to kindly ask if the screenshot is a reporting requirement, or is it optional?

     >> It is optional.  You can do it, you don't need to.  It is just a possibility.

     >> MODERATOR: Okay.

     >> That is why I kindly informed you about it.  I just sent a request to everyone to turn on their cameras, so they should see this request.  Yes, I think they have just started turning on their cameras.

     >> GBENGA SESAN: That is good.

     >> MODERATOR: Wonderful, so I think we are ready to go.

     >> If that is the case, then please, let's begin.  I will just --

     >> MODERATOR: We can hear you, but I'm not sure who we are hearing?

     >> I just decided to use the opportunity to test my microphone, but if it is fine then --

     >> MODERATOR: On my side, I would say it's a little quiet.  If it is possible to increase the volume a little bit, then that would be great.

     >> Okay, is this any better?

     >> MODERATOR: Not for now.  At least not on my part, but I mean we can hear you.

     >> Because I'm not sure if I can improve anything apart from just speaking louder.  That is the one thing I can control, I guess.  But other than that, I'm afraid I exhausted the options.

     >> MODERATOR: That is fine.  You are audible.  And if anyone has any doubts, we can just ask. 

     And I would like to clarify for those joining, there is a recommendation to turn on your camera, if you wish.

     But I would say it is not a requirement.  We would also like to respect differential bandwidth, so it is completely okay if you choose not to turn your camera on.

     I think we have a critical mass.  In informal terms, 30 might be a critical mass.  So let's call it that and continue the conversation as people join. 

     First of all, thank you very much for joining, everybody.  Good morning, good afternoon, good evening depending on your time zone. 

     I hope you have had a very illuminating IGF.  This is, as you know, the last day.  I'm Jan Rydzak, I am the Company Engagement Lead and Research Analyst at Ranking Digital Rights. 

     In case you are not familiar with RDR, we are a research project that creates global standards for companies to essentially make good on their responsibility to protect and respect human rights. 

     We place a special emphasis on freedom of expression and information as well as privacy.  And the way we pursue this goal is by evaluating companies, evaluating and ranking them. 

     We evaluate some of the largest tech and telecommunications companies in the world, and we rank them on their policies, their practices, and the commitments that underlie them or, in many cases, should underlie them.

     And we capture the differences between those policies and practices between companies through the RDR Corporate Accountability Index, which to many of you will be known as the RDR Index, quite simply. 

     And the RDR Index is a kind of tool kit.  I don't use that word lightly, but it is a tool kit.  Just to give you a snapshot, the 2020 edition has 58 indicators that are broken down into 335 individual questions on policies and practices that range from transparency reporting to targeted advertising practices to how companies use and develop algorithms, and how transparent they are about that. 

     And all of these questions fit into three large pillars.  And that is governance, freedom of expression and information and privacy. 

     And not coincidentally, that maps perfectly to the breakouts we have in today's session.  It's certainly not an accident. 

     So what are we focusing on today?  This workshop is essentially about looking at situations of crisis.  So political crisis, health emergencies like the one we are experiencing currently.

     Outright conflict.  Network shutdowns, overall situations where massive violations of human rights are either taking place or have the enormous potential to take place. 

     The leading question for this workshop is what are the shortcomings of company accountability in times of crisis?  And what can be done, what can we do as a human rights community to make companies more accountable in the area?  What are the main points of focus that we should pursue?

     And with that, I would like to introduce our speakers.  So starting with Isabel Ebert, who is an expert at the Office of the UN High Commissioner for Human Rights.  She's working with the B-Tech Project on issues that largely revolve around governance and how companies' governance should follow the UN Guiding Principles in particular.  So she is doing really exciting work on that front, and I'm sure we will hear a lot about that.

     We also have with us Dorota Glowacka, who is a lawyer with the Panoptykon Foundation.  Panoptykon, as you may know, is a Polish NGO.  They do work in privacy, but I think more recently it's also been a sort of mover on the broader universe of human rights.  And I think that's kind of what Dorota would like to focus on. 

     And we have Mr. Gbenga Sesan, who is the Executive Director of the Paradigm Initiative.  The Paradigm Initiative, as many of you know, is an organization that works to deepen digital inclusion and promote digital rights in Africa.  And, if I'm not mistaken, Gbenga, you are still a fellow at the Stanford Digital Civil Society Lab, is that correct?

     >> GBENGA SESAN: That's correct.  No thanks to COVID.

     >> MODERATOR: Exactly.  Well, flexible work models are a hallmark of the COVID era. 

     So as part of this already overly lengthy intro, I would like to remind everyone that this session is not just about COVID.  In fact, I would say it is not even just about global crises.  What we often think of as localized emergencies can completely consume the lives of the people who suffer their impact the most. 

     And obviously the pandemic has brought to the fore a lot of issues around unbridled data collection and its impact and its implications.  And I think all of us agree that it has kind of launched us into a new reality.

     And as part of that reality, the practices that have become commonplace during the pandemic risk being normalized, carried over to other contexts, remixed in ways that may have potentially harmful effects.

     But here in this session, we're kind of looking to address a longer standing problem, which is that it is hard or it is sometimes downright impossible to kind of forecast how a company will respond in a crisis situation in general. 

     So I would encourage everyone to sort of look past the pandemic, look beyond the pandemic and recognize that crises simply do not always impact us all the same way, and recognize crises that do not directly impact us all.

     Okay.  So let me tell you about the structure of this session.  We will begin with interventions by the three speakers, that is about five to seven minutes, with some flexibility there. 

     So, of course, we are fairly flexible.  And then we will move on to a discussion that will end around, I would say, 55 past the hour, give or take a minute or two, if there is some flexibility. 

     And then after that, technology willing, we will break out into breakout groups.  Obviously, that may be a challenge, but we will keep our fingers crossed and hope for the best.  Those breakout groups will focus on governance, freedom of expression, and privacy, as I previously hinted. 

     And the presentations will provide grounding for those breakouts.  The breakouts will largely be randomly assigned and each moderated by a member of the RDR team. 

     And I would like to take this opportunity to introduce my colleagues Veszna Wessenauer and Afef Abrougui, who will be moderating the other two sessions aside from myself. 

     And then we will finally come back for the final discussion and wrap the whole session up.  On a logistical note, please feel free to use the Q&A for substantive questions.

     Okay, so I would like to start with Isabel.  So Isabel, the B-Tech Project is really, I think, leading a lot of work in terms of providing guidance on how to apply the UNGPs to digital technology.  What's interesting I think about B-Tech and your approach is that you are focusing on some of the core areas, some of the areas that are really core to company governance.  So business models, human rights due diligence, remedy. 

     I'm wondering if you can tell us in general how good governance should come into play, especially in times of these major calamities.  Over to you.

     >> ISABEL EBERT: Thanks, Jan, very much for the invitation and also for the opportunity to talk more about our work. 

     So I think, of course, you said the panel is not only about the pandemic; but, of course, we have seen a lot of company behavior responding to the pandemic that is in a way a crisis, but also shows how companies are preparing for other crises.

     We have seen that the pandemic obviously demonstrated the key challenges that we have in upholding human rights, in particular in this very close interlinkage of the State and businesses alike, where the States were really dependent on the cooperation of companies in order to roll out certain technical solutions for managing the pandemic.

     And these challenges on the State-business nexus are actually already addressed in the UN Guiding Principles on Business and Human Rights.  They apply, by their very nature, to all sectors.  So they are not tech specific yet, but that is what we are actually taking on with B-Tech. 

     We are breaking down the UN Guiding Principles on Business and Human Rights for the technology sector.  And that also means that all technology companies have the responsibility to respect human rights at all times and across all of their operations.

     What the UNGP also recognized, and I think that's important for the crisis moment, is that the State has a strong role in supporting companies and carrying out that responsibility to respect human rights.  And there's specific language also on the necessity of the State to support companies in crisis situations or also conflict situations where there is a heightened risk of human rights abuses.  And that is certainly something that we see in the setting of the pandemic. 

     And what we are going to do with B-Tech is that we are producing more dedicated guidance on how technology companies can protect human rights going forward.  And we have already put out a range of foundational papers that you can find on our website. 

     These are really a conversation starter.  And based on these papers, we want to really follow up and coordinate with all stakeholders in that setting to make the guidance as concrete as possible.

     To your question on good governance, I think I really want to highlight three key points today.

     On the one hand, transparent governance and accountability structures.  On the other hand, the necessity, even though companies might be confronted with very significant public pressure and also time pressure, to carry out human rights due diligence as best as possible. 

     And then thirdly, really emphasizing also the role of stakeholder engagement in that. 

     On the governance and accountability part, when we see that States are contracting with technology companies or seeking support from these companies, we really need to ensure that the governance between those parties is clear and is really based on responsible data governance and responsible data use.  That not only applies to the big tech players; we have also seen that there are many mid-sized and smaller players that have stepped in during the pandemic, and that also step in during other crisis moments.  Often the public attention focuses on the big tech players, but the smaller ones can also have a tremendous impact on human rights.  And I think we have seen that in the past with some of the niche surveillance companies really playing a critical role here.

     So B-Tech really set out to make sure that we have robust and transparent ownership structures when it comes to digital infrastructure, but also, when it comes to digital use of data, to make sure that rights holders can have a say and can contest decision-making in case of doubt. 

     And we have also seen some interesting cases, for example, in the Netherlands on that.  And when it comes to human rights due diligence, my second point, we have seen that those businesses that had already in place robust structures in order to respond to crisis moments have actually been able to manage the crisis moments better.

     Of course, when you already have your human rights management structures in place and then the crisis hits, it is easier for you to make sure that the communication across the company is sort of more or less clear and coordinated, and that you can respond accordingly in multi-disciplinary teams and really make sure that you get the information from the grassroots level, but that you are also able to scale it up to the top of the company. 

     And that is something we see a lot, for example, in the field of government requests for user data, where actually those companies that have experienced these crisis moments again and again have now adopted more robust managerial structures and can respond much quicker. 

     I'm happy to talk across the panel also more about some ideas for stakeholder engagement.

     Our work really departs from the thorough necessity to identify, address, and mitigate adverse human rights impacts stemming from technology companies.  And that is very critical at this very powerful State-business nexus.

     >> MODERATOR: Thank you.  I think there are a lot of questions stemming from that.  We can pursue them in the Q&A. 

     And it is especially interesting to me what you are saying about the importance of pre-existing structures, right.

     Preparedness and readiness are obviously an important part of resilience.  And I think a lot of the time we do see, across the board, both with the technology companies in general and the telecommunications companies that we sort of focus on as an organization, that a lot of the processes they adopt are kind of ad hoc.

     And I kind of wonder about the role also of sort of targeted human rights due diligence that is focused on the crisis moment in that respect.

     There have been some initiatives in that direction, but I think they are still kind of tentative, and we are still at a very early stage of this conversation.

     So I would like to bring in Dorota, actually.  If your microphone works.  I'm assuming it's okay.

     But, Dorota, the Panoptykon Foundation, you know, for those of us who are familiar with it, has provided a wealth of commentary on threats to privacy.  Actually, I think that is the primary area for which the Panoptykon Foundation is known. 

     But there is also, again, an emerging strand of analysis that I think you're leading in the organization around the challenges of things like content moderation in an environment that is disrupted. 

     So especially if you could zero in on the role of algorithmic decision-making and the role of remedy or appeals, which I think are two topics that you focus on in this disrupted environment.  If you could give us a sense of that, I think that would be very important.

     >> DOROTA GLOWACKA: Sure, thank you very much for the invitation. 

     And yes, I would like to briefly discuss the implications of the pandemic for freedom of expression by focusing on how the current crisis has affected the content moderation practices of big internet platforms, as I believe this is an area that to a great extent determines what we can and can't see and say online.

     So as you may have heard, once the pandemic started, the big tech companies introduced some changes to their content moderation policies.  And basically what happened was that they literally sent the human moderators home and shifted part of the work to the algorithms. 

     So obviously the algorithms had been used before for content moderation purposes, but during the pandemic crisis the platforms have admitted to using them even more for those purposes. 

     And that was not the only change in their policies as regards content moderation.

     What is important is also that they kind of limited the appeals procedures that were available on those platforms.  And just to make it clear, at least in my opinion, these procedures have always been far from perfect, I will put it that way.  But right now, they have become even more limited. 

     For example, Facebook has temporarily suspended reinstating content after a user who would like to contest a takedown decision files an appeal.

     What was the effect of all this?  As you probably know, there are studies that suggest that the algorithms used for content moderation are not 100% effective.  In fact, they often carry a risk of over-removal on the one hand, and on the other hand they often fail to identify truly illegal material.

     So in general one may say that algorithms are prone to errors in this respect.  And there are certain mechanisms that can be introduced in order to mitigate the risks associated with using algorithms, or in general with the arbitrary powers of platforms to moderate content.

     And these are the appeals processes and certain transparency mechanisms that can be introduced.  But as I said before, those have never been perfect.  And during the pandemic, I would say they have even deteriorated. 

     So the combination of these factors has led to a quite disturbing effect that is demonstrated in the data provided by the platforms themselves in their transparency reports.

     If you look at the data concerning content removals provided by Facebook or by YouTube, you can see that during the pandemic there has been a massive increase of removals after the new content moderation policies have been introduced.

     Basically, the amount of potentially harmful content removed during the pandemic doubled compared to a similar period before the current crisis.

     What are the regulatory implications or conclusions that can be drawn from that effect?  Those conclusions are not ground-breaking.  These are things that many human rights advocates have known already and have been advocating for quite a long time.

     So in order to mitigate the negative effects of using algorithms to moderate content, three things are basically needed.  One, a proper transparency mechanism including transparency on the role and functioning of the automated systems used for content moderation.  So that's like one thing, more transparency. 

     The second thing is effective appeals processes within the platforms, so-called due process. 

     And the third thing is external independent oversight of final platform decisions. 

     So these are the three elements I would say that are needed, and I'm not going to go into details what they should mean because those details can be found in many soft law documents already developed by many human rights bodies. 

     And also there are quite a lot of recommendations developed by academics and NGOs in this respect.  Probably the most prominent example is the so-called Santa Clara Principles, which concern, among others, recommendations on the proper appeals process that should be implemented on the platforms. 

     And we at Panoptykon also try to contribute to these efforts, maybe not in the most obvious way you may think of, because last year we started litigation questioning an arbitrary removal done by Facebook. 

     Basically what happened was that Facebook blocked, without any explanation, a page of another Polish NGO.  And we are trying to convince a Polish court, so far anyway, or at this stage, to consider this arbitrary, nontransparent removal unlawful.

     Again, I'm not going to go into details.  If you are interested in the lawsuit and how it goes, we run a web page dedicated to the case where you can follow its progress.

     But what I would like to stress, one last thing and probably a key takeaway.  So the remedies for the situation that I kind of presented, they have been already kind of discovered.  There is like not much new you can say about it. 

     But what became apparent out of this experiment that happened during the pandemic is that -- well, let me put it the other way.  You could get the impression over the last years that due process on the platforms has progressed. 

     The platforms have introduced certain additional safeguards, such as, for example, the recently established Facebook Oversight Board, and other elements that probably have improved the situation. 

     But what the current crisis has shown is that this progress is to a great extent illusory.  Illusory in the sense that progress based on self-regulation can basically be withdrawn at any time, and the platforms can basically change their policies overnight. 

     And, therefore, I believe that this is very important evidence that basically should convince us that we are really at the stage where we really need a binding regulation instead.  Thanks.

     >> MODERATOR: Thank you so much.  Sorry for the few seconds' break.  I was looking for the unmute button.  Thank you so much.

     I think that is interesting, and you touched on multiple very interesting points including the role of independent oversight.

     I'm very glad that you mentioned the oversight board also as a sort of counter example to what independent oversight may be because there is a limit to its independence. 

     A lot of us in the session I think have worked on this topic.  And, of course, there is a sort of independent element in it, but it is still sort of dictated by the platforms' rules.  So it would be interesting to see how it evolves and what kind of initiatives it informs.

     And I think what you said about appeals is also interesting, specifically in the case of Facebook, because if you do a deeper dive into their most recent transparency report, one thing that jumps out is that in normal circumstances the number of pieces of content restored after a user requests or appeals a decision normally dwarfs the number of pieces of content that are restored without an appeal. 

     You can see how important that mechanism is, and how impactful and important the absence of that mechanism has been in the course of the pandemic. 

     And I think this also underscores the importance of being prepared.  The pandemic in a way was an unexpected event, but then it became an expected event.  And I think there was a lot of preparation that could have been made that perhaps was not.  So thank you very much for that again. 

     And moving to Gbenga.  One of the problems that I think keeps human rights advocates up at night is that the pandemic will be the kind of perfect excuse for excessive government demands, right?  Demands for user information, demands to block accounts, subsumed under the umbrella of emergency procedures. 

     And that I think ties into kind of broader questions about how companies react to turmoil in ways that may contribute to the long-term corrosion of human rights.  And Gbenga, I know that the Paradigm Initiative has done work on topics -- (no audio).

     >> GBENGA SESAN: It is Jan, it is not me.

     >> VESZNA WESSENAUER: I actually know the question. 

     Jan wanted to ask about the work you have done on topics like how companies react to excessive demands in your countries of focus.  And what lessons can those precedents teach us?

     >> GBENGA SESAN: For a second there, I thought my internet was frozen and there could have been many reasons for that.  But it is not mine.

     Okay. So I think it's important to -- I mean we will try not to talk about this crisis during the crisis, even though it is difficult. 

     But I think the fundamental thing to note is that companies have a major, major role to play in what becomes the new normal.  What companies allow or even, you know, take up during this pandemic and during any crisis becomes the new normal for a different set of actors, especially the actors that have an agenda.

     I will give you very simple examples.  There are countries that shut down the internet during examinations.  There are countries that are known to shut down the internet during any kind of crisis.  There are countries that will shut down during elections. 

     Now, so what happens is that when there is an unexpected crisis like COVID-19, you know, and businesses do not ask critical questions when demands are made of them, those countries with an agenda will take what they are able to get and make it a new normal. 

     A very simple example is data governance during this period.  Data governance during this period is a bit flexible for many companies because, for almost everyone, there is a climate of fear.  Maybe now that we're hearing about vaccines that are 95%, 92% effective, things are somewhat calming down, but the fear is still there.  It is like, let's survive 2020 and then we can have other conversations about privacy and things like that.

     But it is important for us to take the long-term view.  The long-term view is that if we allow data collection that is not guided by certain principles during this period, trust me, there are countries that will refer to this period and say, but in 2020, Germany collected data this way.  But in 2020, France collected data this way. 

     And these are the countries that we work in.  So in our 2019 Digital Rights in Africa report, which focuses on 13 countries, you will see countries that talk about shutting down the internet during examinations. 

     So, for example, I mean everyone knows that Ethiopia learned that bad habit from Algeria.  We don't know who learned from whom between Iran and Algeria.  But now Ethiopia has learned that lesson and has continued the practice, even when everyone had hopes that Ethiopia was going to change its ways.  We could tell, because the laws guiding that don't change. 

     And that takes me to my second point.  Unless the laws change to become more friendly, or we disallow new laws, nothing changes.  I can't tell you how many legal proposals we've tried to fight back against across many countries in 2020 alone.

     There are so many countries that are trying to push through emergency laws, and those emergency laws have provisions that, when you look at them, have nothing to do with COVID-19 and everything to do with clamp-down opportunities for those governments.

     One of those bills, by the way, that we reviewed was a two-page document.  A two-page document with nothing about COVID-19 but everything about insulting national security, national morality, and the powers of the government to shut down the internet and things like that.  Of course you can tell that is based on a political agenda. 

     Where the companies come in is that companies -- and I keep saying this, and I have been saying this for about two or three years now -- companies need to be more activist, you know.  And I say this because people think, yes, companies are not like nonprofits, they can't be activists and things like that.  But there will be no bottom line for you if we allow things to move too far. 

     And the reason I say this is that in many countries where global platforms operate, including tech companies in Africa, what governments use is not even the law.  It's a simple phone call.  And I said this yesterday.  There are countries where we have on record that a phone call from the President's or the Prime Minister's office or from a Ministry goes to a telco, and they shut down the internet without any data trail.

     Even when the law of the land says that you have to do it and you know that it is not the right thing to do, the question still stands.  There are three things that we agree, in terms of norms, should be considered before actions are taken.

     The first is in terms of legality.  Is it legal?  Unfortunately, in many cases in many African countries, we've seen that many laws actually allow shutdowns.  Many laws actually allow bad practices. 

     But is it proportional?  In many cases, proportionality and necessity are the two other elements that can allow businesses to ask questions.  You know, we've had examples not only in the tech sector but in many other sectors.  A recent example that I gave is from the banking sector in Nigeria.  You know, the bank regulator, the Central Bank of Nigeria, was trying to, you know, basically clamp down on protesters by freezing their bank accounts. 

     And then the bank regulator said to banks, we have these 20 people we suspect have ties with terrorism; block their accounts.  There was no document from the courts.  And some of the banks went on to block these accounts immediately.

     And then about 18 days later, the banking regulator produces the document from the courts to say, by the way, for that illegal action you took, it is now legal because we have the document.  So it is important to think about legality and proportionality and necessity.

     Is it proportionate?  It has to be.  And it has to be necessary.  I mean, many of the actions that we documented in our 2019 Digital Rights in Africa report are as good as using a grenade to kill a mosquito, you know, when you take a look at the actions that are taken by governments.

     So someone says something online.  Someone asks a question: why is the governor of my state spending more money on burying dead people than on education?  And this person was fired and taken to court.  How on earth does someone ask that question and that is what you think it is necessary to do?

     And speaking of companies pushing back, one of the things that we've talked a lot about here and even in the conversations is transparency reports.  And, of course, this is not new to you, Jan, and colleagues -- many of these companies, under pressure from RDR and co., have now broken down many of the components that they used to lump together. 

     But guess what?  There was not one African telecom or tech company that produced a transparency report until recently.  And I can tell you why.  The reason is that many of them know or believe that if they did that, they could lose their licenses.  And this is why it is also in their own interest to make sure that they push back and do not get, you know, clamped down in an atmosphere, in a climate, of fear. 

     Good news: MTN, which received the biggest fine in telecom history on the continent because of what happened to them in Nigeria, has now released a transparency report.  And guess what?  Their licenses were not withdrawn. 

     So all of their assumptions -- and we have been asking for transparency reports since 2000 and -- was it '11 or '12?  In the first conversation, it wasn't even called a transparency report.  It was just a simple question.

     So in 2010, a telecom company, name withheld, got contacted by the presidency in Nigeria, and they released text messages by an opposition politician.  And we started asking questions in 2011.  To date, it is now nine years, and one company.  Which is good news, you know.  We don't always release press statements to commend a company for doing something, but we're deliberately going to do that now; we're going to commend MTN for doing the right thing finally.

     Better late than never.  Of course, we will still ask them to break down some of the elements.  I mean, it is interesting that MTN is now taking credit for being the last company to shut down the internet in Sudan and the first to restore it, you know.

     Why on earth does anybody have to take credit for such a thing?  They should not have shut down in the first place, but that's fine.  It is important not only to release transparency reports but to provide enough information for everybody to understand what is going on.  Because at the end of the day, the one element that these companies thrive on is trust.  There is no digital or big data economy without trust.

     Many times that trust is breached.  What happens is that the companies begin to die, because people begin to understand: I can't trust them with my data; I could get into trouble with my government for using this platform.  And by the way, this is one of the things that we are very, very excited to look at in our 2020 Digital Rights in Africa report, which will be released in April next year. 

     For the 2019 report, we put together a 19-minute short film to demonstrate exactly what's going on in the 13 countries we looked at.  And we intend to do the same thing, so I will be inviting you guys to a movie premiere for the 2020 report. 

     But I think it is very important that companies do understand that the biggest loser in the erosion of trust is their bottom line.

     >> MODERATOR: Thank you so much, Gbenga.  There is so much to unpack there. 

     As you know, MTN is one of the companies that we rank at RDR.  You also touched on a point that a lot of us have seen in the past, which is that transparency reports also don't get specific enough, right.  In many cases there is an arbitrary element to how companies disclose things in general, and especially during the pandemic.  Sometimes we see arbitrary sorts of collections of data being published.  In some cases, which I will not name specifically, shutdowns are bunched together with other kinds of restrictions, such as blocking individual numbers, in ways that make real transparency impossible or next to impossible.

     So I would -- since we don't have that much time, and I think we have a lot of questions here, I think we have a few questions in the chat.  But I believe we have a participant who has raised his hand. 

     That is His Excellency Dr. Haissam Bou Said. The floor is yours.

     >> HAISSAM BOU SAID: Yes.  Hello, everybody.  I'm going to make it short to give time for everybody to deliver their speech.

     Since we are talking about international crises, the COVID-19 pandemic is one of this type of international crisis.  I also heard some of the colleagues speak about a lot of topics, and we agree on some. 

     Especially Mr. Sesan when he spoke on the Nigerian aspect; I want to call his attention to the fact that we monitored the presidential elections and also documented a lot of violations, even from the media and internet side.

     The issue is that in the Middle East, where it is our duty to follow all of the violations of human rights as Middle East Commissioner of the International Human Rights Council, we believe that we witnessed a lot of fake news carried by many social media outlets, television stations, and newspapers, and it is still being carried on until now. 

     And the matter of fact is that such fake news created a lot of chaos in the understanding of and approach to the main crisis.  It also misled a lot of States to take actions on some violations, as it was not, you know, well pointed out in these fake news media. 

     The problem is not with the laws; the laws are there.  The problem is with the implementation of the laws carried out by some of the companies, and also with States who, for various purposes, are not willing and are not showing intentions to carry out legal actions against some social media.

     We are not for the shutting down that is being done by many social media platforms, but we are also not for letting them, you know, continue with all of this fake media.  And as I said, the problem is also with the State legal departments that are not taking action.

     Therefore, we believe that the Federation on Communication, which is one of the UN bodies, should take full responsibility for that and jump in on all of these kinds of crises to put an end to all of these violations.

     Because we cannot continue like that, misleading social and public opinion.  And we cannot continue like that because it is jeopardizing our work at the Human Rights Council and other international bodies.

     Therefore, we believe that it is also needed to focus on how to trigger these laws and how to make States and companies together to stick to these laws.  Thank you.

     >> MODERATOR: Thank you, Dr. Bou Said.  That is informative and cuts across a lot of what our three speakers said.  It also offers a different perspective than what we have heard so far.

     I would also like to get to a couple of the questions in the chat before we split up into the breakout groups.

     And the first one that I see here is from the Danish Institute for Human Rights.  And the question to Dorota specifically. 

     Do your findings suggest that the increase in content removals during COVID is because of more aggressive removal practices?  Or is it because the health crisis has prompted companies to identify more content that can actually be harmful?  Do you have a sense of where that stands?  Let's go ahead and give you the floor.

     >> DOROTA GLOWACKA: Sure.  Well, the truth is I cannot really exclude the possibility that the amount of harmful or illegal content has increased during the pandemic and that this is the factor that has led to the effect that I described.

     And the reason why I can't exclude this possibility is simply because platforms do not provide enough data to accurately assess it, at least from the perspective of an external researcher or an NGO monitoring practices of the platforms. 

     Having said that, though, I would stick to my hypothesis that it is actually the use of algorithms that has contributed vastly to the increase in removals, for a couple of reasons.

     First of all, the increase in removed materials has been so massive; that is one thing.  And then you can link it to other data that you can see in the transparency reports.  For example, in the YouTube transparency report you can see that 90% of the content that was removed was actually flagged and identified by algorithms. 

     So if you link those numbers together, I believe that this supports the hypothesis that the algorithms played a very crucial role in the process of removing content. 

     And then another argument is that it really goes in line with the findings of the previous studies that I referred to, which basically suggest that using algorithms increases the risk of over-removal. 

     And finally, in recent months the companies have started to withdraw from these new policies.  They either made it official or we know it from informal conversations with content moderators.  The companies came to the conclusion that at least some of the work should be shifted back to human moderators.  And again, humans have been more involved in the process of at least conducting oversight of the algorithmic decision-making processes.

     So again, these are the arguments that support my hypothesis, I believe.

     But I guess the core problem here is that as long as we do not have solid data that would settle it one way or another, we can just guess and make those assumptions. 

     Getting this data would actually be really helpful to see what the core reasons are for such a massive increase in content removals.

     >> MODERATOR: Thank you so much, Dorota.  And this, as with any of these questions, obviously opens up a whole lot of other questions; it is a sort of Russian doll situation in which we can expand into an infinite number of additional questions. 

     I think there is a lot to say here about the role of algorithms in individual parts of content moderation in general.  One of the areas that RDR is focusing on is the role of algorithms versus human moderation in both appeals and takedowns of content.  There are problems due to the clustering of different kinds of enforcement, you know, takedowns or shutdowns of accounts.

     But I think we also need to -- in the long-term we also need to look at the solutions, right.  So what can we do about that?  What can we do about algorithmic systems that are simply not accountable?  Is it audits?  Is it publishing detailed policies on how companies use algorithms and for what? 

     There is, I think, multiple ways to break it down and I think all of them are worth pursuing in separate panels if not entire workshops.

     So given that we are running relatively short on time and we have a tight schedule, I would propose that we will take the rest of the questions from the chat after the breakouts to make this as sort of inherently practical as possible. 

     And we will try to return to those once we are done with the breakouts.  So if my colleagues from IGF could kindly split us up into three rooms, if that is possible, I think we are ready. 

     (Breakout Room 3)

     >> AFEF ABROUGUI: Hi, everyone.  I'm not sure, I think this is it. 

     >> PONCELET ILELEJI: Hi, everyone.

     >> AFEF ABROUGUI: I mean I'll just start by introducing myself.  My name is Afef Abrougui, I'm a Research Analyst at Ranking Digital Rights.

     Maybe since we are a small group, we can do a quick round of introductions and then we can start. 

     >> PONCELET ILELEJI: Hi, I'm Poncelet Ileleji.  I'm CEO of Jokkolabs Banjul.  I'm the national resource person for our NRI in The Gambia.  Thank you.

     >> AFEF ABROUGUI:  Thank you.  Someone else would like to go?

     >> KAMAYANI BALI MAHABAL: Hi, I'm Kamayani, I'm from India.  I am working on the issues of gender, health and human rights, and I'm involved in movements. 

     And my suggestion would be, since we have very little time, can we just start, and when people intervene, they can introduce themselves.

     >> AFEF ABROUGUI:  Yeah, that's also good.  So the purpose of this breakout group, it is to discuss two points.  The first point is to sort of identify and discuss trends in crisis response from telcos and technology companies and whether those trends align or not with international human rights standards. 

     And the second point is to identify good practices that companies should adopt and follow in these types of crises. 

     And once more, just to remind everyone that we are not only dealing with global crises like the COVID-19 pandemic; we are also taking into account other types of crises, including conflicts, disasters, and humanitarian crises.

     So I'm not sure if anyone would like to start.  Maybe to add to anything that was raised, but also to discuss trends other than those already discussed.

     >> PONCELET ILELEJI: I will start.  First, looking at what they have done in terms of the pandemic, I will look at it from the perspective of the Global South, and from my end in The Gambia. 

     What the telcos did, especially for all the university students, was to make sure they had data to be able to access Google Classrooms.  So they were given that access, data access for the classroom.  That was very good.

     But another thing, which I did myself for my staff -- and I think that is a good global best practice in terms of pandemics -- was being able to provide them with data and good connectivity at home so that everybody would be on the same level.  Even my drivers had connectivity.  My janitor had connectivity, you know. 

     Those were things that, in the normal context, you don't get an allowance for as part of the paycheck.  I thought that was a good practice. 

     A lot of companies I know in different parts of Africa did similar things; I was even surprised when a friend in Ethiopia explained to me what they did there.  So that is something I think we can, as a group, use to improve our standards and our recommendations, apart from what telcos have done in terms of trying to minimize the impact.  Thank you.

     >> AFEF ABROUGUI: Thank you.  Also just a reminder that here we are talking about good practices and trends in relation to freedom of expression.  Good access and good connectivity, of course, fall under this breakout group's discussion.

     And particularly also to avoid shutdowns during a global pandemic, when people need access to information; a shutdown would be a very bad practice in this kind of crisis.  Anyone else?

     >> ISABEL EBERT: Maybe to jump in. 

     What we can obviously see is that also the discourse becomes captured by people that do not necessarily support a human rights-based approach to these topics.

     And this is obviously a growing problem, because by that we are diluting the discourse, and we are threatened by the idea that human rights language can be used against some of the people that we actually need to protect on the ground, right?

     So I think what I was also aiming to emphasize in my contribution is that it is not only the technology companies that have to really work on a human rights-based approach; it is also the States. 

     And that, of course, can sometimes lead to adverse situations where, in the end, technology companies see themselves confronted with demands from the government that might not be in line with a human rights-based approach to governing technology, right? 

     This is a little bit of what Gbenga has been touching upon, but it is what we see in many jurisdictions, and in the transparency reports in particular it is very obvious.

     If you look at, for example, data from 2014 around the protests that were happening in Hong Kong and so forth, you see that there is immense pressure on technology companies to cooperate with States. 

     So I would really recommend people to have a look at these transparency reports.  They're very, very useful in some ways with the caveat that they are not standardized and the numbers of one company don't necessarily compare to the numbers of other companies.

     >> AFEF ABROUGUI: That is a good point.  Maybe a good practice here is to have companies be more involved in multi-stakeholder initiatives, not only in these times of crisis but also before and during these crises. 

     Do you have any other -- I'm not sure if you have any other recommendation about this point that you raised specifically for companies?

     >> ISABEL EBERT: Yes.  From my experience, what companies have been doing now, through certain processes, is really coordinate.  Even though I said earlier we have a lot of attention on the big players, they also have the capacity to coordinate and roll out big processes. 

     So basically there is a critical mass of big tech companies that have come together and talked about how to address certain issues, for example network shutdown demands or government requests for user data.  And they have come up with a sort of joint idea of how to approach this.  Some of this is reflected in the principles of the Global Network Initiative.

     But others are a sort of more informal way of exchanging views on how to react to certain situations.  And by that, obviously, you create the sort of leverage that the UN Guiding Principles on Business and Human Rights ask for: work together, sometimes even with your competitors, on the specific area of upholding human rights protection when you're confronted with challenging situations in conflict settings or crisis moments. 

     So I think here we have seen some progress, and also as Gbenga mentioned, obviously there are more and more companies joining this critical mass and entering into these really significantly important discussions.

     >> KAMAYANI BALI MAHABAL: I would like to add from India, I think we have to look at the relationship between the State and companies as well.

     In India, we are a large democracy, but we are being governed by the corporates.  Especially during the elections, where does the money come from?  The money, again, is coming from the big tech companies and giant corporations.

     So this sort of -- you know, there is no transparency, as we have talked about, having transparency reports, that has not been happening.

     When the tech companies are giving money for elections, there is, I would say, an unholy nexus here where human rights suffer.  I think we need to think about how we can actually challenge this unholy nexus.

     >> AFEF ABROUGUI: That is also a very good point.  But do you have a suggestion here for a good practice or a recommendation that companies could adopt?  I mean, that is fine, but I am just raising this because it is an important point.

     >> THOBEKILE MATIMBE: Yeah, can I just jump in there to say it's critical to balance out the power dimensions that come between the State and telecom companies. 

     Basically, what is critical is, you know, for companies to deliberately engage a wide range of stakeholders, as has already been echoed, because I think that is kind of like the best yardstick when looking at what is best practice. 

     Because when there is that engagement, there are robust discussions around what best practice is and what is internationally acceptable in terms of respecting human rights.  It's a non-negotiable issue.  That should be well understood.

     And sometimes there's a tendency for telecom companies to kind of slacken when we're looking at their interactions with the State itself.  

     So you need that continuous engagement with other stakeholders on a constant basis, I mean stakeholders who are not the State, who are continuously echoing what the human rights issues are, what best practice is, and what should be adhered to.  That is very critical. 

     And also, I think, you know, maybe engaging at the level of even getting capacity from stakeholders where human rights issues are concerned.  Because we must admit, they might not be the best experts on human rights issues. 

     But to bring in that interface where stakeholders consistently engage and, you know, capacitate telecom companies, so to speak, on critical human rights issues; this is very pertinent.  This is also why we would call and say, you know, come on board when we have forums that discuss these issues so that you also get that capacity.

     >> AFEF ABROUGUI: Thank you.  Anyone else? 

     And what about the point in relation to the use of algorithms, now that at least a big part of the content moderation work for some of the big platforms has been shifted to algorithms since the pandemic? 

     Maybe you can also address that point if there have been any particular trends or cases that you noticed in your own countries and regions.  And what kind of practices should companies adopt to be respectful of human rights when using these algorithms?

     >> ISABEL EBERT: I'm happy to say something about that. 

     The issue is that many companies, in particular the social media platforms, think that they can manage content governance to a large extent based on automated decision making, sort of scanning posted content by automated means.  So to a certain extent, sometimes there is not necessarily human oversight involved.

     And when it comes to hate speech, this is obviously problematic because hate speech is inherently context specific.  So if you just use sort of natural language processing techniques, you will miss out on the contextuality of the statement. 

     If you filter on certain trigger words, you miss statements phrased in a way that doesn't use the trigger words but still says the same thing.  So we see that a lot when it comes to managing this content.

     And I think, turning this into a recommendation to the technology companies, this would mean that they really need to invest in more staff, actually having human oversight over these discussions.  And that also includes having sufficient resources for people in non-English-speaking environments. 

     So, for example, there is the issue that we don't have sufficient Arabic-language moderators.  And again, there you see that some of the injustices from a power perspective are replicated even in that very low-hanging managerial area, where you could actually make an effort from a management point of view in scaling up these efforts.

     >> AFEF ABROUGUI: Yeah, it's a very important point to raise, particularly about hiring and training content moderators who speak multiple languages, specifically in certain countries. 

     I have had a discussion with a researcher who covers Ethiopia regarding some of the crises that have been happening there with the political transition, the hate speech, and the acts of violence. 

     And there is a problem of hate speech, but there is a lack of content moderators in the local languages in Ethiopia.  It is not just one language; there are many languages in Ethiopia.  And that is a very big problem.  Would anyone else like to share?

     What about the issue of so-called fake news, or disinformation and misinformation? 

     We have seen recently several companies taking action to downrank certain kinds of content that they consider a threat to public health, content that could actually misinform people about what is happening with the pandemic and about how they can protect themselves. 

     Are there any implications, or could there be any implications, for freedom of expression and information when companies take that kind of action?  Do we want companies taking more action on this type of content?  Does anyone have any thoughts?

     >> GBENGA SESAN: Sure.  So I was hoping to get a chance to respond to the comment about fake news by, I think, the Ambassador in the open panel.  You know, I hear a lot of people talk about fake news and hate speech as if the real culprit is social media or the platform the content circulates through. 

     I think everybody agrees disinformation and hateful speech are things that need to be tackled.  But what we can't afford to do is to solve a problem by creating another problem. 

     In an attempt to solve a problem on disinformation and fake news, hate speech, we have now introduced practices that are themselves illegal, unnecessary, and disproportionate.  That definitely isn't the way to go.

     So we had a lot of debates, especially in Ethiopia, Cameroon, and Nigeria.  We had to put out a statement at some point and follow up with some of the processes. 

     It is interesting that Nigeria seems to have made quite some progress with what is called the anti-social media bill, which is basically meant to prohibit falsehood and misinformation on the internet. 

     But the bill hasn't been able to do that.  What it has done is give more power to a police force that is already known for brutality and illegality.  It provides for possible internet shutdowns and gives more power without the oversight of the judiciary, by the way.

     What we have now been able to do is have a number of conversations and say that, you know, in the first place we do not know of one country where control has worked.  Another word that many of them use for it is regulation.  Regulation doesn't necessarily mean control.  Regulation could mean creating standards. 

     But speaking of social media regulation, we don't know of one place in the world where social media regulation has put an end to disinformation campaigns or dangerous speech on any platform.

     But what we are seeing is that certain communities -- and we have seen this in distributed networks, and I use that word loosely -- can do this themselves.

     I will give the example of WhatsApp groups.  In a group, for example, where all of the things that are happening on the internet happen, the community comes together and says, listen, this is a code of practice here: we will not share further any information that we consider to be false and that you cannot prove.  So I think that there are two things we can do.

     One is in terms of educating people a lot more so they can identify what is dangerous and what is false and not share it. 

     And secondly, to encourage people to be able to identify common codes of information in conversations.  And many times what many governments refer to as fake news or even hate speech is actually citizens speaking up about some of the malpractices they are seeing.  That was the case for the protests that happened in Nigeria. 

     You know, they are now saying that many of these participants, many of these activists, were propagating false information, because the military at first released a statement denying it.

     And then the military said, well, we were actually there, but we were invited by the governors.

     And then a few days later they said, okay, we were there, and we shot, but they were blanks.  So it is important for us to push back and not allow people to use this to advance a clampdown agenda.

     >> AFEF ABROUGUI: Thank you.  I'm going to have to interrupt you because we have three more minutes.  It is a good tactic to adopt, but do you have a recommendation for companies in this case?  A quick one.

     >> GBENGA SESAN: A quick recommendation for companies: like Isabel said earlier, they need to hire smarter.  Not necessarily hire more; hire smarter. 

     Hire the people that understand the context, that are good with nuances and can help you make the right decisions.  Machines can't get us out of this, unfortunately.  We can solve unemployment problems in many areas by hiring the right people.

     >> AFEF ABROUGUI: That is a good recommendation.  Does anybody else have anything to add?  Because we have probably one or two minutes.  Okay. 

     Then I think we can -- yeah, we should be free to go back to the entire group.  And thank you very much for your contributions.  And I will summarize the points when we are back.  Thank you all.

     (Main room)

     >> MODERATOR: Hello again.  I think we are getting a trickle in of participants. 

     Welcome back.  I think we will try to run through some of the sort of like top-level conclusions from our breakouts real quick. 

     As people filter in, I will begin on our side.  Our breakout room discussed the potential of governance-related solutions, or governance-focused solutions. 

     And there was a lot of commentary, especially on the role of due diligence, human rights due diligence.  And in particular, within that field, we talked a lot about the role and potential of mandatory human rights due diligence frameworks. 

     I'm not sure if we had a quorum or consensus, but there was quite a lot of sentiment that there is a very clear role for mandatory human rights due diligence.  And not necessarily on a country level but on a regional level.  As many of us know, the European Union is spearheading a lot of those efforts.

     And I think the second top-level highlight for me -- I tried to emphasize this at the beginning of this session -- is that it is important to realize that there are localized problems that also require a specific, you know, localized approach, right?

     A crisis in the form of the U.S. elections is a priority for large tech platforms, given that it unfolded in a priority market, but elections happening elsewhere are not less important.  The same sort of importance should be accorded to other markets, especially given that a lot of the big platforms have the resources to expand their focus there. 

     There are a lot more thoughts there, but I would like to leave the floor to the other two breakouts and collect a couple more questions before we wrap up.  So that is it for me.  Over to you, Veszna.

     >> VESZNA WESSENHAUER: Thanks, Jan.

     So we were talking about the privacy aspect of today's session and managed to get through three sets of questions only.

     One of the things we were talking about was how transparent the companies are about data collected for track-and-trace COVID programs. 

     No one in the group was really familiar with the processing practices of these apps.  But what we could clearly say is that companies in general are not clear about these things; they are very non-transparent about their data collection and data handling practices. 

     And it was brought up that this question probably also has a cultural aspect, like how these applications are trusted across different cultures, and who the owner is or who is developing these applications.  Is it the government?  Or is it a private corporation?  Those have different perceptions in different countries. 

     And then we also talked very briefly about sunset clauses for crisis data collection and processing, and what best practices can be defined if there is no end in sight to the crisis.

     Again, what we said is that in general companies are silent about data retention periods; they don't tell much about how long they are going to retain users' data. 

     But as a best practice, we were talking about opt-in and opt-out options, that is, control in the hands of users over data collection and data processing.

     And we also had a few minutes to talk about telecommunication companies' transparency reports accounting for government requests for user information.  And we had an example from Brazil, where the government wanted excessive access to telecommunication companies' user information.  But if I understood well, the Supreme Court halted this, and it was not implemented in the end.

     And this was during the COVID crisis.  So it was for the purpose of handling and mitigating the risks of COVID by having access to user data.  But it was not successful.  Maybe we can elaborate on this if people are interested.

     >> MODERATOR: Thank you very much, Veszna.  And we will move over to Afef to give us a sense of the freedom of expression discussion.

     >> AFEF ABROUGUI: So in the freedom of expression breakout group, we first discussed the issue of internet access, and that as a good practice it is important for telecommunication companies to ensure access and good connectivity, and also to try to push back against network shutdowns when they can.

     The other problem or trend we discussed is State and company interactions, for instance around requests for user data.  One example, from India, was that some companies, from the telco sector but also among the big companies, are not supportive of human rights, and these sorts of interactions can be threatening to human rights. 

     In these cases, the good practice that was recommended is that it is critical for companies to engage with a wide range of stakeholders on human rights.  It is also critical for companies to coordinate around these issues and try to roll out some processes.  Big tech companies have already been doing this around certain issues, for example through the Global Network Initiative, but it is also important for smaller companies to do that.

     The other trend was content moderation, particularly when it comes to the use of algorithms and how that can be detrimental to freedom of expression.  In this case, the good practice is that it is important for companies to have human oversight over content moderation that involves algorithms, particularly for certain types of content such as hate speech.  Because for hate speech, deciding which content to take down requires context, which is very important, and algorithms are not capable of understanding context. 

     The other recommendation or good practice is that it is very important to invest in more staff and human oversight: content moderators working in different languages who actually do understand the context.

     We discussed, for example, content moderation in certain countries including Ethiopia, where there is a very complicated political context and so many languages being spoken.  Hate speech is a big problem there, but companies are not doing enough.  I think that is it; unless I forgot something, maybe someone else from my group can highlight it.

     >> MODERATOR: Thank you, Afef.  I'm impressed you are both much more thorough and comprehensive than I can ever be.  Thank you for being so precise with your comments. 

     Is there anyone from any of the groups who would like to expand on what was mentioned?  Isabel, go ahead.

     >> ISABEL EBERT: Yeah. I think one point that I highlighted and that is important is that the UNGPs clearly single out that both governments and technology companies have to use a human rights-based approach. 

     Also, when it comes to the context, perhaps the situation on the ground is not crystal clear.  But again here, pointing to my earlier comments, the governance and accountability has to be there.  Whether it is the government or the company, it needs to be clear what happened on the ground and how both of these actors, or sets of actors, responded. 

     And that also includes the necessity for both actors to be transparent in their publications about what exact steps they have taken to identify potential adverse impacts and how they reached out to rights holders.

     >> MODERATOR: That is a very important point.  And it reminds me of one of the few instances I know of personally of a mandatory approach to disclosing human rights practices.  Some of you may be familiar with the mandatory disclosure requirements that the U.S. imposed on companies operating in Myanmar a few years back.

     That was sunsetted.  The process was interesting because companies were required to provide progressively detailed information about precisely that: both the situation on the ground and what they as a company were doing to respond to an increasingly volatile environment and set of circumstances. 

     I think it is important to look back at some of the potential successes in requiring greater accountability and transparency like that and learn from them.  And also learn why they were discontinued at the same time.

     Okay.  We have three minutes left in the session.  I would like to leave that for any burning questions.  I will try to prevent myself from going into a monologue. 

     So please go ahead.  If anybody has a burning question you would like to raise, now is the time.  Or a point you would like to address.  Okay.  I will give a five-second warning.  Okay.

     Unless there is -- I think silence is consent in this case.  So if nobody else has any commentary, I would like to wrap up and say, you know, we didn't set out to change the world in a single IGF workshop, that is practically impossible, but I think this is a good seed. 

     You know, our goal was to sort of take an interactive approach.  Excuse me, Your Excellency, you have a hand raised.

     >> HAISSAM BOU SAID: It depends on the Memorandum of Understanding that these companies have with States. 

     If the memorandum is restricted to some areas, such as crime information or high state security information, then we might accept non-transparency in those cases, as specific items that should not be revealed to the public.  Otherwise, everything should be transparent.

     >> MODERATOR: That's a very interesting point, and also connects with the idea that some interventions can be targeted, right, as far as the algorithmic systems are concerned. 

     Many times companies want to prioritize the most harmful content, and perhaps that is where they should be disclosing more about how these systems work, and perhaps opening them up to audits. 

     So with a minute left, I would like to thank everybody for participating, especially our speakers.  But also my excellent colleagues from RDR.  All of you went above and beyond.  Thank you so much. 

     There will be a report coming from this workshop shortly; but in the meantime, please don't hesitate to reach out to RDR or any speakers who participated in the session if any of this piques your interest. 

     I would like to add as a closing remark that a crisis like this is something that happens rarely, you know; some would call it a black swan event.  A very rare crisis that has a strong impact on a lot of people. 

     But at the same time, I think it's important to remember that many times social systems come out of crises stronger.  Not just with the same level of resilience, but actually more resilience.  So I would wish for that.  And I think we can be more vigilant and more watchful now, having almost gone over the hill in this crisis, since we know much more, I think, than we did a year ago about how human rights abuses can often fly under the radar. 

     So I wish us all more resilience in the coming year and strength since we have survived such a difficult time.  Again, stay safe and please enjoy the closing ceremonies.  Thank you very much, everyone.

 
