IGF 2022 Day 1 Open Forum #64 Spotlight on AI-driven content governance in times of crisis

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> JULIA HAAS:  Okay.  We will start now.  Apologies for the five‑minute delay.  Good afternoon.  Welcome to our session on AI and content governance, hosted by the OSCE, the Organization for Security and Co-operation in Europe, specifically by the Office of the OSCE Representative on Freedom of the Media, which explains the regional focus of the experiences we will mainly hear about today.  We will be discussing free speech safeguards for digital information spaces in times of crisis, crisis in a broad sense, including conflict and COVID.  This session is hybrid, meaning we have two speakers here with me in Addis and two additional ones joining online.  To my left we have the lead at Access Now.  We have worked a lot together in our project putting a spotlight on AI and its impact on Freedom of Expression, including on the policy manual that I also brought here for you in physical copies, and that we will introduce a little bit later.

They're also doing phenomenal work on content governance, also in crisis, which we'll hear about later.  To my right is the technology and women's rights expert at SCP, and online we have Matthias Kettemann, who is a professor of law, and Tetiana Avdieieva, who is a human rights lawyer in Ukraine.  I'm also joined by my colleague who is co‑moderating online.

Before we dive into the discussions, I would like to hand over the floor to our director, who will kindly stand in for the Representative, who is currently on the road to the Ministerial Council in Poland, for his opening remarks.  Juergen, you have the floor.  We cannot hear you.  Can you hear us?  I'm not sure this works.

>> My video is also black.

>> JULIA HAAS:  We can hear you.  The camera is ‑‑ okay.  We are having technical issues, as with any online session.  Apologies for that.  Maybe we'll give it one more try.  Otherwise, I think, well ‑‑ you can hear us?

>> DENIZ WAGNER:  We can hear you.  But we're having some technical difficulties with the video.  Apologies for that.  Can you hear us?

>> JULIA HAAS:  Yes, we can hear you.  We only have a black screen.  Maybe the technicians in the back can help with that.  The screen of the speaker stays black despite enabling the video.  In the meantime, while they are trying to fix this ‑‑ sorry?  They say it's ‑‑ they say it's on your side.  Do you want to speak without the video?

>> Since we have only one hour for this event, I have to apologize for this hiccup.  This technology is connecting us, but not always perfectly.  I think we need to start right away to not lose any time.  Let me wish you good afternoon from Vienna.  I'm very happy to be connected with you, at least through this channel.  Ladies and gentlemen, this year marks the 25th anniversary of the mandate of the Office.  When it was established 25 years ago, only 1.7% of the global population was online.  Today that number has risen to more than 80% across the OSCE region.  This monumental expansion, with all the benefits of the free flow of information as well as human rights concerns, such as cybercrime and the spread of violence, has changed the way we seek, receive, and impart information at all times, but also during crises.

Moreover, we've seen a monopolisation online, with a handful of platforms acting as gatekeepers.  They deploy machine learning and AI to govern online spaces.  AI is used to decide which content is taken down, what content is prioritized, and to whom it is disseminated at what point in time.

These decisions, which shape and arbitrate political and public discourse, are executed by technology that is not necessarily designed for accuracy, diversity, or the public good.  This year more than ever, it is undeniable that this has a direct impact on global peace, stability, and comprehensive security.

Over the last couple of years we have witnessed how the amplification of hate speech online can impact whole societies, during health crises or during war, often in irreparable ways.  Content governance has to put human rights at the center.  This holds at all times, and it becomes even more essential in times of crisis.

We have to work together through multi‑stakeholder and multilateral efforts, such as the one here at the IGF, to make sure that the gatekeepers to information and their business practices are in line with international human rights standards.  With power must come responsibility.

Earlier this year, the Office of the Representative on Freedom of the Media launched its Spotlight policy manual, which aims at ensuring that artificial intelligence is conducive to our digital rights.  In order to achieve this, the policy manual provides human rights‑centric recommendations and free speech safeguards for the use of AI in content governance: basic guidelines that are needed to preserve and foster the internet as a space for democratic participation and representation, for media freedom, and for security.

The recommendations build on transparency, accountability, inclusiveness, and public oversight.  Without those, we cannot have Freedom of Expression in the digital realm.  It is also necessary to discuss them in the context of crisis.  A few weeks ago, the Office of the Representative held a workshop on content governance in crisis.  AI can fuel disinformation and suppress accurate information, but it can also empower communities and enable access to independent information.  This is relevant throughout the entire crisis life cycle and irrespective of the type of crisis, be it conflict, climate change, or COVID.

The workshop came out with a few key messages around preparation, due diligence safeguards, crisis protocols, and inclusive processes for ensuring vibrant and healthy information spaces.  In the next hour we will have an opportunity to discuss these points and validate them based on your input and feedback.

As a next step, we will present them to the OSCE participating States as policy guidance.  Thank you very much for joining us today and helping us to get one step closer to solid policies to enable and protect our human rights online at all times.  I'm very much looking forward to the discussions, and thank you very much.

>> JULIA HAAS:  Thank you very much, Juergen.  Apologies that the video did not work.  As for the recommendations you mentioned, we brought a few copies of the policy manual here, and I will put the link in the chat for later.  As Juergen mentioned, they're focused around transparency, accountability, and public oversight.  They're really crucial for ensuring online Freedom of Expression at all times.  But over the course of the last months or even years we have seen that, in particular in times of social uprising, of tensions or conflict, but also of other crises, for example a global pandemic or other health crises, natural disasters, or the climate crisis, it's even more important that we're able to access independent information, that we can communicate freely, and that all of us can really use the internet as a space for that.  Given the right of society to know and be informed, it is important to discuss content governance in times of crisis, also, I think, at an IGF hosted in a country where millions of people are currently not able to access the internet because it is being shut down.  We're looking forward to the conversation here.

As mentioned, a few weeks ago we hosted a workshop that looked at the recommendations we published earlier this year on key principles of content governance, generally to see how they're applicable in the context of crisis, because crisis affects the availability of information, the accessibility of public interest content, and the dissemination and flow of information.

During the workshop, we identified five key points that we want to discuss today.  And we really want to open the discussion up to all of you, to receive input and feedback before we finalize this as policy guidance to the participating States.

Speaking about states, I want to dive right in and hand over to our first speaker, who is online, and I very much hope that it's possible to enable the camera now with the technical support.  Matthias Kettemann was Rapporteur of the workshop and moderated the conversation during it.  Matthias, could you talk about states' positive obligation to protect, but also to fulfill and ensure, Freedom of Expression online, including in times of crisis?  Can you enable the speaker's microphone and camera?  There are two speakers online.  I put their names in the chat.  I'm unfortunately also not able to enable it.

While you're figuring out the technology, I will take advantage of speaking in the room so we do not lose more time, and we will speak about the specific roles of states later.

Now I would move to you, Eliska, and the role of platforms in preparing for crisis.  Ah, I see Matthias.  If you don't mind, I'll still start with the state.  The floor is yours.

>> MATTHIAS KETTEMANN:  Thank you very much.  And thanks to the tech people for enabling, well, the tech.  This goes to show the power of infrastructure.  This is one of the topics we talked about during our discussion of the different obligations of states and of companies with regard to the use of AI in content governance.

So I'll focus on the states first.  States, as Julia just mentioned, have the obligation to respect, protect, and fulfill their human rights obligations, including Freedom of Expression, in the digital as well as the offline world.  It is their task to enact an effective set of rules making it possible for individuals to enjoy their human rights and making platforms stick to human rights.  They have to enforce human rights vis‑a‑vis third parties.

Now, specifically with regard to AI in content moderation, especially in crisis situations, we've highlighted a number of important points.  Namely, states have to make sure that the overall media structure in their countries enables democratic discourse to emerge.  Democratic discourse was extremely important before the internet and remains so in the time of the internet.

In addition to a fixed set of rules, any policy shocks caused by crises have to be cushioned by flanking measures: by laws, by rules that make sure that crisis situations do not fundamentally challenge the media landscape.

Specifically with regard to AI and content governance in crisis, we have to ensure that we differentiate between short‑term, mid‑term, and long‑term crises.  In short‑term crises, states have to oblige platforms to take measures that are particularly fast and effective, measures such as targeted blocking of content inciting violence.  We see what happens if companies fail to do that.

Just a couple of hours ago, videos from the massacre reemerged on Twitter, because Twitter is now failing to take those down effectively.  Their AI has been reconfigured.  States have an obligation to ensure that platforms meet their human rights obligations.

In long‑term situations, states have to oblige platforms to provide crisis protocols in order to assess the risks their systems pose to information flows, and to make sure that these crisis protocols foresee: specific parameters used to determine which particular exceptional circumstances exist; the role of each stakeholder and the actions they need to take; clear procedures for when to activate such a protocol; and safeguards to avoid adverse effects.  These are important elements, which for the European Union are contained in the Digital Services Act, and we can generally adopt them.

The establishment of efficient crisis protocols and transparency rules, in accordance with the manual the OSCE has been working on, is important to ensure human rights, including during crisis situations.  Those rules would be most effective if they're built on multi‑stakeholder engagement, in close coordination with Civil Society, including through established mechanisms.  States have an important role in that regard.

I end with that again, because it cannot be said often enough: states have to respect, protect, and implement human rights online and offline, vis‑a‑vis platforms, because people deserve their rights also when they're online.  Thank you so much for having me.

>> JULIA HAAS:  Thank you so much, Matthias, for this overview of obligations and for referring to the regulatory framework that States should put in place to ensure our rights are protected online.

I want to pick up on one very important thing you mentioned, which is that crises are different and require different responses.  But we have identified some similarities, like short‑, mid‑, and long‑term crises and the impact of crisis.  I want to look specifically at the time before a crisis and at early warning signals: how content governance, and automated content governance in particular, could identify early on that a crisis is evolving and what the risks are.

This brings us to due diligence safeguards, where we have obligations for States but also for private actors.  Eliska, can you give us some information on risk assessments, and also on the Declaration you just launched a few minutes ago?

>> ELISKA:  Thank you so much.  It's great to be here.  Let me briefly introduce Access Now for those who do not know our work.  We're an international human rights organisation that protects and defends the human rights of users at risk around the world.  It's a great pleasure to be here.  We operate in several world regions, where we work on policy as well as advocacy efforts for the protection of human rights in the online environment.

Indeed, this is a timely panel.  Apologies for arriving a little bit late.  We just launched our Declaration of principles on platform governance and content governance in times of crisis, which we have been developing together with our partners over pretty much the last six months, through different processes of drafting and consultation.

I can't emphasize enough the human rights obligations of States and the importance of keeping them at the core of platform accountability, platform responsibility, and content governance in general.  While our Declaration is targeting social media companies, the actions and regulatory responses of States, their respect for the rule of law and human rights, and, subsequently, the regulation of platforms are complementary and go hand in hand.  We actually see platforms' inconsistent responses to situations of crisis in different parts of the world: we can clearly trace how platforms respond in countries where they're under significant regulatory and public pressure, and how their adequate crisis response lags behind in countries, frequently in the Global South, that are not Western or not big markets where their reputation and profit are at stake.

Access Now could identify these issues, including during the latest illegal invasion of Ukraine by Russia, where the application of different carveouts from platforms' content governance policies was communicated in a nontransparent manner.  At the same time, this ongoing invasion actually showed how quickly platforms are able to respond, and how neglectfully they lag behind with adequate responses in other situations of crisis that are maybe not so much in the public eye or under so much pressure.

So maybe a little bit about the logic that we used when we created the Declaration, which is now available on our official website.  I want to emphasize that this is a collaborative effort with our partner organisations, including Article 19 and the Center for Democracy, but also organisations that directly operate in situations of crisis.  We were thinking about what kind of measures platforms should comply with before, during, and in the aftermath of crisis.

And this is not to say that, in our view, there is ever an end of crisis.  A crisis has a complicated life cycle.  These monitoring measures and different risk‑mitigating measures have to be put in place continuously by platforms.  And you specifically asked me how to prevent the crisis: what are the measures that companies could and should put in place prior to a crisis?

So we definitely put lots of emphasis on human rights due diligence safeguards.  First and foremost, especially at the stage before the crisis escalates, these include meaningful engagement with trusted partners on the ground, Civil Society organisations, and other independent stakeholders that operate in the country, understand the context of the country very well, and can identify the first signs of possible escalation pretty accurately from the very start.

And we as a Civil Society organisation know from our own experience that such engagement often doesn't really exist, or that there are no proper mechanisms that would lead to proper follow‑up on how our recommendations were implemented by platforms, and on whether they in any way actually informed platforms' terms of service or content governance policies to mitigate risks from their systems and processes, including algorithmic content curation, which plays a crucial role in the way you're able to access information in general and especially in times of crisis.

A number of recommendations that you can see in the Declaration then depart from the human rights due diligence principles as defined by the United Nations Guiding Principles, as well as by a number of international bodies.  This was a point that we reviewed and did our best to include in the Declaration.  These safeguards have to be designed in a way that they're able to address the life cycle of the crisis, the situation of conflict, and the human vulnerabilities that are essentially connected to situations of crisis and should come first.

I couldn't agree more with the point Matthias made: crisis protocols have to be developed across all levels, addressing the likelihood of risks that stem from platforms' systems and processes.

This should actually be done not on an ad hoc basis, as we often see, and not in a nontransparent manner where we don't even have proper information about what kind of crisis protocols platforms really use and on what basis.  This should be done prior to the escalation of the crisis.

That way, a number of issues that otherwise result in rather short‑sighted solutions can be pre‑mitigated, and the effectiveness of these measures can be much higher.  Before I give you back the mic, I also want to go back to the point that Matthias made about the human rights obligations of states and how important they are in content governance, especially in times of crisis.

The international human rights framework is a legally binding framework.  We often see states pretty much across the world abuse platforms for their own political agenda, be it state‑sponsored propaganda, hate speech, incitement to violence, and so on and so forth.  We also know what a crucial role these platforms play, especially in times of crisis, when they're often the last resort where individuals and communities can seek remedy or redress, or measures and tools to feel secure and safe in the online environment.

Access Now has been monitoring a number of cases of how States abuse this vulnerable position of platforms in situations of crisis, either through blocking individual platforms, targeting and blocking communication channels, especially social media, or through internet shutdowns.  Julia nicely pointed out how damaging internet shutdowns can be for human rights protection and people's safety, especially in times of crisis, when adequate access to information saves lives.

Around the world, states point to the spread of disinformation and hate speech as a justification for these kinds of very strong and negative measures, especially in times of crisis.  To name a few, countries such as Azerbaijan or Kazakhstan, which are OSCE participating States, performed internet shutdowns precisely during crises.  We observed some of the longest shutdowns in history in Pakistan, Ethiopia, Myanmar, and Kashmir.  These cases show how crucial adequate access to information is, and how damaging internet shutdowns are in the long term.  This is closely connected to the importance of platforms and technologies in the context of crisis, and to how fragile these platforms are, or how easily they can be abused by states, if we don't have proper due diligence safeguards in place in the long term.  Thank you.

>> JULIA HAAS:  Thank you.  There's so much there to unfold: again, the role of States in ensuring rights, but at the same time also regulation by States, which, as we have unfortunately seen repeatedly, can be problematic or misused.

I want to touch upon one thing that you mentioned, where I also saw Tetiana, our next speaker, nodding heavily into the camera: the whole discussion about meaningful engagement, and engagement with Civil Society in the local context specifically.  This is relevant, as you pointed out, in the pre‑phase of a crisis, but also continuously once there is a conflict or crisis situation.

Tetiana, of course, you are based in Ukraine, and you have the very specific example of the war against Ukraine currently going on.

Can you maybe speak to the collaboration that is needed during crisis: why it's so important to have trusted partners, but also coordination with the platforms directly, and what the framework for such collaboration would need to be to ensure constant and continuous risk assessment that leads to mitigation of challenges and can really improve access to information online, but also everybody's ability to communicate in difficult times.  Thanks.

>> TETIANA AVDIEIEVA:  Thank you very much.  And also thanks to the previous speakers, who made a great outline and mapped all the important issues we will discuss today.  My name is Tetiana Avdieieva.  I'm a human rights lawyer in Kyiv and serve as legal counsel for the Digital Security Lab in Ukraine.  We're working towards a human rights‑based online environment through promoting the enhancement of private sector companies' policies and also promoting legislative changes in Ukraine.

As regards the question on the agenda regarding cooperation and collaboration between various stakeholders, I think it's very important for the platforms, first of all, to develop the criteria based on which stakeholders are chosen for cooperation, and to make those criteria publicly available for review by independent Civil Society experts and other individuals.

Why is it important?  Because a multi‑stakeholder approach to the formation of policies implies not only engagement with Civil Society but also at least attempts to build a dialogue with various States, especially in the context of crisis.  I'm speaking not only about armed conflict but about various crises; we have recently experienced one related to disinformation during the COVID times.  In this respect it's important to remember that States, although they are in most cases perceived as being biased, are still very important actors, especially in cases such as what is now happening in Ukraine, where the Russian illegal invasion affects not only physical facilities but also the information space, especially inside social media.  Here it's important to take certain criteria into account.

These include the level of human rights protection in the State and its level of involvement in an armed conflict: for example, a clear distinction shall be made by the platforms between aggressor states, defending states, and third parties.  We know many States are now distributing information, for example on refugees, although they're not directly involved in the armed conflict itself.

Also, it is important to engage stakeholders such as the media, who are constantly trying to report information in armed conflict.  They become victims of false positives and false negatives of artificial intelligence enforcing content policies: for example, when publishing images of victims of violence, rape, murder, and so on, which in fact is legitimate content and shall be preserved as evidence.

Finally, as to Civil Society, it is important that organisations be independent, politically neutral, and aware of the regional or local context.  This is particularly important because sometimes social media are trying to look for bigger, let's say, stakeholders, who organize regional hubs or who are international organisations dealing with issues in a particular region but are not very specifically aware of the peculiarities of the situation at stake.

Why is engagement needed at all?  I think it's quite a general question to ask in such circumstances, but it is a very important one.  One of the most evident reasons is language.  Not all stakeholders, not all social media platforms, actually know the local language.  We have the case of Myanmar, where a real crisis happened, as further confirmed by the independent fact‑finding mission in 2018.  But knowledge of the language alone is not enough; that's important to remember.  For example, a person from Canada who knows Ukrainian most probably won't be aware of the semantic peculiarities and the specific words which are used in the context of the armed conflict to replace certain words and get past the algorithms without being banned.

It is very important to have people engaged who directly deal with the cases happening on the ground.  Also, it is important to engage the local communities to anticipate threats.  Here I wanted to say that probably one of the biggest mistakes before the war in Ukraine in 2022 was that platforms tried to maintain a reactive rather than proactive approach, namely reacting to cases which are already happening: reacting to the crisis depending on its peculiarities, but only once it has already developed.

The main task now is to develop a proactive approach.  Here I absolutely agree with the provisions of the Declaration, to which I was very glad to contribute: we have to work on the development of crisis protocols and on enhancing the human rights protection framework, not only during and after crises but before they start, not only to have the fastest reaction but to mitigate the risks.  I will stop now and give it back to Julia.

>> JULIA HAAS:  Thank you so much for this overview.  There are so many points we could follow up on.  I think it's really important what you pointed out, also at the end, that we need to ensure proactive responses, which links back to how you started your presentation: we need clear criteria that are developed in an inclusive manner, that are developed transparently, and that are the same and applied in the same manner irrespective of the conflict or crisis, while ensuring contextualization with regard to language and cultural issues.

I want to now move to the post‑crisis context, because this is something we have not focused on much yet.  At the same time, what came out of the conversation so far is really this need to assess and anticipate risks before they develop into violations of free speech and human rights online, from both sides: from the side of businesses and online platforms, and from the side of States.

If we look at when crises or conflicts are dissolving a little bit, or are in a phase of reconciliation, there are, of course, continuing challenges.  We know this from the offline world, too, when we speak, for example, of the inclusion of women in peace‑building efforts and all these kinds of issues.  The digital sphere provides a possibility for more inclusion, if content governance did not replicate the biases from the offline world or, in a worst‑case scenario, even amplify them.

So from a gender perspective specifically, but not only, looking at this post‑crisis situation, and I think this is especially important as we're now in the 16 Days campaign against violence against women: what is needed to ensure that content governance, and specifically automated content governance, can help reconcile and rebuild communities and really help build peace and comprehensive security in the long term?  Marwa.

>> MARWA FATAFTA:  I think conflict is gendered, because at the end of the day those who sustain our communities are women and young people.  They're really at the front lines of the conflict, trying to help their communities at a local level but also at a regional level.  Most states really react from an emergency perspective.

So just after a war breaks out, then we need to find out what solutions we should implement.  Which doesn't really work, because it's really emergency driven.  At the end of the day, we need to sustain the efforts.

So the outlook is that we have a lot of frameworks in different parts of the world, such as cybersecurity frameworks, but they're not really looking into how different people experience online spaces.  Because, again, at the end of the day, someone from sub‑Saharan Africa wouldn't really experience the internet the way someone from northern Europe does.  And this is something online platforms have a hard time understanding.

Unfortunately, the reality check is that today some wars are treated as more worthwhile than others.  Online platforms are being selective about which war is worth their interest and about the way they deal with it.

As we've seen in Myanmar, and as we've seen in parts of sub‑Saharan Africa, the way social media platforms reacted wasn't the way they reacted, for instance, in Ukraine.

The fact is that today we have content governance that is not rooted in the local context.  A lot of content moderators are hired from an emergency perspective.  They're trained in a week to understand the conflict and the perspectives of different people.  A lot of community guidelines are not even translated into local languages.  It was really a long fight with online platforms to have guidelines in Swahili, Arabic, and other languages and dialects from, for instance, the continent here, since the IGF is in Ethiopia today.  So it's really important to push for more local context when we talk about content governance today.

I think, again, going back to the post‑crisis phase, we really need to think more expansively about conflict in a digital age.  Because at the end of the day, once the conflict starts offline, it also starts online.  And there's a lot of weaponisation of data: data related to sexual and reproductive health, data related to survivors of rape, data related to women human rights defenders and to other defenders that are at risk.  They risk their lives.  Unfortunately, we don't always have the capacity to help in that sense.

There's also the issue of refugees, people fleeing wars, who end up not being traceable.  There's no data on them, and we can't really help them access vital services.  So this is something to think about in the context of, again, peace and security.  It's also a recommendation for governments: to think about security within a development approach that is centered around human rights.

I know it sounds like a lot, but at the end of the day, if we just keep thinking about security in terms of the state and national cybersecurity frameworks, we won't be able to provide services that respond to what is happening on the ground.  And we end up having people without papers, without access to services, who are being threatened, who are being subjected to sexual human trafficking that starts online, who are subject to extreme offline terrorism.  It's really a comprehensive outlook that should be taken when it comes to conflict settings.

We need to, again, have this conversation about global solidarity when it comes to conflicts in the Global North versus the Global South, because the reality is still striking: the way social media platforms work in some settings is not similar or equal, and they're not really invested in a lot of regions where we deal not only with conflict but also with a lot of issues pertaining to social justice and inequalities.

>> JULIA HAAS:  Thank you so much.  There are two things I want to reemphasize, because you put them so nicely.  First, that online and offline are not two separate spaces; they are so connected today that online violence and offline violence are closely interlinked, and there's more than enough evidence of that.  The second point is that content governance needs to be rooted in human rights and local context.  I think this is crucially important.  Thank you for making those points.

With this, I want to open the floor.  We still have 15 minutes, if there are questions or comments in the room and also online.  My colleague Deniz will be helping with comments from our online participants.  While you're still thinking about your questions and comments, I will start with one that came up earlier.  Another aspect that is relevant in the context of content governance, and specifically of crisis, is that in the current, largely AI‑built information spaces, information is being weaponized through information operations.  Disinformation is spread online, which at the same time is also an early warning sign of conflict or crisis in many contexts.

I want to touch upon this.  Tetiana, you mentioned this before, so I would kindly ask you to take the floor first, and then we'll open it up to the other panelists.

How can content governance, including automated content governance, address this phenomenon and this problem without constraining rights, but rather being conducive to inclusive conversation and the digital rights of all?

>> TETIANA AVDIEIEVA:  Thank you for the question.  I consider it particularly important, and my answer will build upon my previous intervention.  Of course, this should be addressed in cooperation with local actors, because the problem with the weaponisation of information is that it usually starts long before the crisis, any type of crisis, actually emerges.

Also, those types of information, those pieces of news, can look quite benign at first glance; namely, the wording of the messages delivered can be relatively neutral.  But the amount of such information and the scope of its distribution, that's what really matters.  That's what really makes the impact of weaponized information.

At this particular point I would like to give the example of Ukraine and how it actually started.  Legal concepts and legal narratives are now weaponized and manipulated: for example, the concept of the failed state, which is used by Russia against Ukraine; the concept of the responsibility to protect, by which the illegal invasion is sought to be justified; and the concept of genocide, which is misapplied to the actions of the Ukrainian authorities and by which Russia tries to justify its illegal invasion.

When we're speaking of disinformation, namely absolutely misleading facts which can harm a particular audience, that's one issue.  But when we are speaking about misinterpretation of legal concepts, the issue is much more difficult.  Why is it important, and why does it relate to platforms?  Because platforms have to deal with avalanches of content which also contain these manipulations, misinterpretations of legal concepts, misinterpretations of historical facts and events.  First of all, they have to cooperate directly with local stakeholders.  Secondly, they have to have really good lawyers.

And probably one of the most important pieces of advice from my side is for the platforms not to wait until the International Court of Justice, the International Criminal Court, or any other international institution issues its decision, but to start reacting when the problem appears.

Because otherwise it will grow in size and be absolutely devastating for human rights.  Also, and this would be the last remark from my side, platforms should not only initiate cooperation themselves when they see that a problem exists; they shall be open to cooperation whenever local stakeholders and local partners see the problem or anticipate it appearing.  Because when platforms cooperate only when they want to, or only when they consider it necessary, it means they're already late: when they see the problem, it already exists, while local partners can flag the problem while it either does not exist at all and is only anticipated, or is in its early stages.

Thank you for your attention, and I'm looking forward to the feedback from my colleagues.

>> JULIA HAAS:  Thanks.  Would you like to add something?  Are there any questions or comments from the floor?  Yes, I see two, three.  We will take them, and then we will hand back.  Is there another microphone?  Yes.  Okay.  Thanks.

>> PARTICIPANT:  Shall I go?  My name is Jacqueline.  I'm from Global Partners Digital.  Thank you so much to the panelists for this discussion so far.  We've spoken about States' responsibility and the responsibility of major online platforms, but I wanted to ask the panelists what they think the role of third‑party content moderation software providers is, and whether that's a step in the right direction in creating more locally sensitive content moderation, also touching on what the EU says about interoperability, and to what extent such functions should remain internal to platforms.  I would like to hear your thoughts on the balance between those two.

>> JULIA HAAS:  Thank you so much.  We don't have too much time left, so we will collect the questions.  We have one here in the first row, then over here, and then on that side.  Thanks so much.

>> PARTICIPANT:  I am from Ethiopia.  Thank you to the panelists for the presentation.  I want to speak on internet shutdowns, because I think international human rights law has some allowance for placing limitations on rights; there is a necessity test.  The internet is good for democracy, but there can be some limitations, as you have pinpointed.  There was an internet shutdown in Ethiopia because of this crisis.  How should we balance the effects of a shutdown against the protection of rights?  If we simply open the internet, it may create very big problems; if we simply close it, that has its own problems.  What is the balance and proportion for us to exercise?  Thank you.

>> JULIA HAAS:  Thank you for this comment.  We will also respond to that.  I will take the question over here and then two more.  I think we can go five minutes over, but we will have to be very brief ‑‑

>> PARTICIPANT:  Thank you.  (Too low to hear) I would like to talk about regional comparison; I don't know if anyone has looked at it in terms of content moderation.  We are doing research on content moderation, and what we're realizing is that the African regions receive much less attention in terms of reporting.  Yes, it's a problem across the board that there's not enough transparency: when content is moderated, sufficient information isn't given.  But even when you look at the transparency reports, there's no disaggregated or detailed data about Africa itself.

For instance, Twitter, TikTok, and not only them.  On political reporting also: you can find that Google has specific reporting on political advertising in some regions, but in none of the countries in Africa.  And yet there's so much influence, especially of foreign actors, in Kenya, in African elections.  I won't go deeper, but I would like to contribute to that.

>> JULIA HAAS:  A very important point, because it's the same for the resources being spent on fighting disinformation.  I think there's another question and comment here, and then one last on this side.

>> PARTICIPANT:  Hi.  I am from (?).  My question is about deep fakes in situations of conflict, the security environment, and the platforms.  What's your suggestion on that?

>> JULIA HAAS:  Thank you.  Can you maybe hand over the microphone ‑‑ sorry.  Yes.  Thank you so much.  It will be very easy for our panelists to address all these important points in two minutes.

>> PARTICIPANT:  My name is Juliana.  I'm from the Inter‑American Commission on Human Rights.  We have seen that there's no (?) and stability in the enforcement of platform rules or platform policies during these states of emergency.  Would an emergency constitution, one that establishes procedures and also gives more emergency powers to platforms, be adequate to address these problems?  Or could it just give rise to other problems?

>> JULIA HAAS:  Also a very important point.  Thank you for ‑‑ yes, okay.  Maybe you can still say something very briefly, but then we have to close down, because there will be another session here in 10, 15 minutes.

>> PARTICIPANT:  Thank you.  (Too low to hear) I have one question.  During the crisis, and not (Too low to hear) the service from the local area and other parts of Ethiopia, but the international community (Too low to hear) only complained about the Ethiopian side.  So everybody should balance it with regard to the neutrality and stability of the media.  The media has (?) rather they ‑‑ they have information about, because it's important ‑‑ so no media should release information under such conditions.  We have to balance it.

>> JULIA HAAS:  Yes, the topic of public interest and media content is also very relevant.  I will quickly repeat the points we just heard and make a quick round with all panelists to touch upon any points you found relevant, and I'll try to close with conclusions.

The first question pointed to the role of third parties and human rights‑respecting content governance tools that could be developed by software providers, referring to interoperability.  The second point focused on the limits of Freedom of Expression in line with international human rights, and specifically the question of proportionality.  The third question was about the lesser attention, not only in resources but also in the reporting of global platforms, when it comes to specific regions around the globe.  The fourth point was on how we can address deep fakes.  I'm afraid we won't be able to respond to that in 20 seconds; it's a crucially important point.

We also had a comment on public interest content and media information.  The last point was the question of states of emergency, and whether there might be crisis protocols leading to platform‑specific, different obligations depending on whether we're in a state of crisis or not.  Eliska, you can start.

>> ELISKA:  Wow.  So with the first question, I think that you are referring to, for instance, the idea of third‑party recommender systems, which could be enabled through interoperability measures.  By the way, that provision is in the Digital Markets Act; we never succeeded in getting it into the Digital Services Act.  That's one of our painful losses in that fight.

But that topic remains on the table as a major priority for Civil Society to explore, especially also when it comes to the independence of media.  As you know, if you put it in the context of EU legislation, there's a new regulation on media freedom coming up, where we will look for avenues to push for that.

Third‑party recommender systems could be developed, for instance, by Civil Society organisations with relevant expertise, for smaller communities, based on community standards that much more closely reflect their values and interests.  In that way you as a user gain more empowerment and control over how your news feed is organised and whether it reflects your set of values.  From our perspective as a digital rights organisation, that is a win/win situation.

Regarding the research that was mentioned, I think it was someone on this side, I would love to receive those resources.  Please don't hesitate to reach out and connect on that and discuss further.  We're super interested, for sure.  Regarding the issue of internet shutdowns, I couldn't agree more.  Freedom of Expression is not absolute; there are permissible restrictions, as you rightly pointed out, subject to necessity and proportionality.  But I have a hard time believing that internet shutdowns, as such an extremely far‑reaching measure, would meet those criteria prescribed by international human rights law.  However, a number of recommendations have been put forward on how to legitimately combat the spread of disinformation and other negative societal phenomena that manifest online.  Access Now, a number of other organisations, international human rights bodies, and the policy manual you also mentioned very much explain what it means to establish a human rights‑centric content governance framework.

And these measures should definitely be applied first, instead of a measure such as an internet shutdown, in our view.  Regarding the comment on special crisis response mechanisms: such a mechanism does indeed now exist even under the European legal framework, where the EU can request a special set of measures from very large online platforms once a crisis escalates.  But we have to be very careful with those measures: how they're being implemented, who has the main oversight and enforcement power to decide what those extraordinary measures should be, and even who actually defines what the crisis is and when a crisis occurs.

It might be the right way forward, but only with appropriate safeguards in place.  And there is definitely much more to say, but I'll stop there; I'm speaking too long.

>> JULIA HAAS:  No.  Thank you so much.

Matthias, Tetiana, do you want to add something?  Matthias, you're unmuted.

>> MATTHIAS KETTEMANN:  I think the idea of ensuring more community engagement is so important.  There are a number of essential new developments in that regard, including the construction of platform councils, which have their flaws, but which can be used to increase the input and output legitimacy of platform rules, either through experts or, even better, through community engagement, as UNESCO has done in a recent project, social media for good, for instance.  It's important to get the people involved, those who are suffering from or influenced by how platforms moderate.  They have to be involved in setting the rules, in a way that is linguistically sensitive and responsive to demand.

>> JULIA HAAS:  Tetiana, two sentences you want to add?

>> TETIANA AVDIEIEVA:  I'll try to be very, very brief.  I wanted to comment on the shutdowns.  My answer to the person coming from the conflict region: it is an absolute no if it's used as a tool for content restriction.  I cannot see where that would be proportionate.  It can be proportionate for a very short period of time when military aggression takes place in a particular region, or in a particular localized area, but only for the purposes of the operation there.

So it is never justified as a content restriction, from my perspective.  Also, the question regarding neutrality has always been interesting for me, because from the start of the full‑scale invasion, I drew a strict line in my head between content stemming from the aggressor state and content from the defending state, and the social media started doing so as well.  The main reason is that, for example, when we are speaking about propaganda for war or propaganda for violence, for the aggressor state it is a manifestation of illegal aggression, while for the defending state it is the only means to safeguard sovereignty.

So if actions of violence can be taken in self‑defense, the question, and this question still remains open, is whether a call to resort to violence in self‑defense would be protected under Freedom of Expression standards.  That's what social media are now trying to address: when videos from the defending state are constantly published, whether that constitutes a call for violence and, if yes, whether it is justified and shall remain available on the platform.

Finally, regarding the question from the colleague from the Inter‑American Commission on Human Rights, I think that in cases of emergency it's very important to have a pre‑developed crisis protocol which can be adjusted to the local circumstances, rather than to develop the response on the spot through trial and error, where something is successful and something is not.  So the framework has to be pre‑established and then adjusted as the situation develops.  That's it from my side.  Thank you.

>> JULIA HAAS:  Thank you so much, Tetiana.  Maybe to add: this is not only your legal assessment; the special procedures, the UN Special Rapporteur, and others have already indicated that complete internet shutdowns are never proportionate.  So this assessment is shared internationally.  Marwa, you have the last word in this round before I close.  What do you want to add?

>> MARWA FATAFTA:  I literally forgot.  I guess for the third‑party providers, for me it is problematic if they're based in restrictive, authoritarian settings, which create a culture of impunity.  As long as there is public transparency regarding the third‑party providers, then we can still negotiate that.

And I guess regarding the African reporting and why there is less of it than in other regions: first, because of the languages that we have on the continent; second, because of the lack of digital literacy.  A lot of organisations don't know how to report; they can't really access the reporting mechanisms.  Sometimes we push social media platforms, or the Oversight Board, which is currently working on that specifically.  As for the deep fakes, I guess they're really changing the course of the war, creating a lot of polarisation.  This is something to take into consideration when we're talking about content governance, but I think it's a conversation to have on its own, as Julia said.

>> JULIA HAAS:  Thank you so much.  Thank you so much to all of you.  There are a few points to summarize in closing.  First, we need proactive responses.  We need crisis protocols based on human rights impact assessments that are consistent, that are transparent, and that are developed ahead of crises.

We need content governance to be rooted in human rights and local context, and we need meaningful engagement and sustained efforts that are really protected through regulatory frameworks, frameworks that focus on the processes and not on the content, on reach and not speech.

Thank you all so much.  We will really take this forward in the OSCE.  Please take a copy of the policy guidance that we have developed so far.  Based on all of your feedback and the great contributions we've received today, we will now develop it further, into guidance on how States should implement free speech safeguards pre, during, and post crisis, to ensure that content governance is conducive to our digital rights and that we can communicate freely.  Thank you again.  Please stay in touch.  Please take one of the copies.  Thanks, and thanks also to those online.