IGF 2018 - Day 3 - Salle VIII - WS452 Community governance in an age of platform responsibility

The following are the outputs of the real-time captioning taken during the Thirteenth Annual Meeting of the Internet Governance Forum (IGF) in Paris, France, from 12 to 14 November 2018. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

 

>> JAN GERLACH:  Good morning, everybody.  This is Workshop Number 452, Community Governance in an Age of Platform Responsibility.

     So if you're here for that, welcome.  If you are not here for that, also welcome.  Please stay. 

      (Laughter.)

     >> JAN GERLACH:  You made it here.  You might as well stay. 

      My name is Jan Gerlach, Senior Public Policy Manager at the Wikimedia Foundation, which hosts Wikipedia.  I'm joined by Anna Mazgal from the Free Knowledge Advocacy Group of the EU.  And Jorge Vargas, also from Wikimedia Foundation.  Juliet Nanfuka from CIPESA.  And Jochai Ben-Avie from Mozilla.

     The topic of this session is, as the title says, community governance: models of community self-governance on the Internet.  And I just want to give a very quick overview of what this is meant to be.

     We are all probably seeing increased pressure and appetite from policymakers to regulate platforms, sort of top-down: to change intermediary liability regimes so that Internet platforms and websites police the content and behavior on their platforms to comply with local norms and national laws.

     We've also heard about this from Emmanuel Macron two days ago.  There is a clear tendency to really make the Internet work again, whatever that means, to rebuild trust, as the theme of this IGF also suggests.  And at the Wikimedia Foundation we watch this with growing concern.  On Wikipedia, people have built policies for themselves: how they want to write Wikipedia, the online encyclopedia, and how they want to interact.  They did this before the Wikimedia Foundation was even established.

     People on Wikipedia talk to each other about policies every day: how they should govern themselves as a community, and also the platform.

     We really believe that law, policy, and regulation should leave room for this, because the Internet really gives us the opportunity to self-organise, to agree or disagree, but to find common ground on policies that make the Internet work and that make the Internet a trustworthy place.

     So today with my speakers I want to discuss, in a conversation that should also include everybody here in the room, because there's a lot of expertise here, obviously, how such self-governance models on the Internet can look, how they are developed, how communities police themselves, and also what maybe doesn't work for self-governance, because sometimes a platform may have to step in when something goes bad.

     And we want to talk about how conflicts are resolved, all within an hour, and what this means for users and cohorts online.  What does this mean for freedom of expression, for the agency of people who want to help build the Internet and make it grow and continue to be a flourishing place?

     So I think we can go ahead and jump right into a discussion of this.  Maybe my four speakers can briefly introduce themselves first.  We are not going to have panel-like statements.  It is just going to be a conversation.  I invite everybody in the room to raise your hand if you have a question, a comment, and really be part of this conversation here.

     Do we want to start?

     >> JOCHAI BEN-AVIE:  Good morning.  I'm Jochai Ben-Avie, from Mozilla.

     >> JULIET NANFUKA:  Juliet Nanfuka.  I work as the Research and Communications Officer for the Collaboration on International ICT Policy for East and Southern Africa, CIPESA.  It's a mouthful.

     >> (Speaker away from microphone.)

     >> JULIET NANFUKA:  Why didn't you warn us?

     >> XIANHONG HU:  My name is Xianhong from UNESCO.  Welcome to the house.  We hope to have an interesting discussion.

     >> ABBY VOLLMER:  I'm Abby Vollmer.  I'm a Senior Policy Manager at GitHub.

     >> JAN GERLACH:  All right, thank you.  So I've talked just very briefly about the Wikipedia community as a self-organising body, actually also in real life, where people meet not only online.  And I want to take the opportunity of having Abby here, working at GitHub, where there is also a community of people who really build things, and ask you how the GitHub community has developed and what the model is that they follow.  Or maybe there are many models that people on GitHub follow.

     >> ABBY VOLLMER:  Yes.  In case there are people listening who don't know GitHub, it's a software development platform; it's where most software developers build software.  Building software is actually quite a complex process, or it can be, involving potentially thousands of people, even millions of people, I suppose, who can collaborate online on projects.

      And I'm saying that at a top level so I can get to the community collaboration part.  When you are collaborating on the Internet, it means that people from everywhere are able to contribute to a project, and whoever is maintaining that project -- the maintainer -- is the person who is in charge of moderating what happens in that project.

     So at GitHub we try to help maintainers, to empower maintainers to be able to moderate effectively.  One thing that is really helpful about open source communities, and the rest of the world using the Internet, is that there is a lot of similarity in what people are trying to achieve.  We want an open, inclusive, vibrant platform that is free of abuse.  That's also what maintainers want for open source projects.  They want a lot of people to be contributing, they want people to feel comfortable there, an inclusive group of participants.

     In terms of how this model might be useful in other discussions, I think there is something there.  One point, just to be fair about how applicable it is: I think it helps open source maintainers that everybody who is in their project is generally trying to build software.  So that is a sort of self-selecting group that is fairly unified in a lot of what they are trying to achieve.  If you compare that to some of the other platforms that are more general use, it may be harder to set rules that everybody is going to get behind.

     That said, I want to give other people a chance to speak, but setting rules, being clear about what the rules are, being clear about enforcement, all that sort of thing -- I'll get into that more as we all talk.  Those are the kinds of principles that I think could be useful.

     >> JAN GERLACH:  So maybe the dichotomy between general-purpose platforms versus specific-purpose platforms, for building software or building an encyclopedia, is what you are describing.  That sort of makes a difference here.

     I wonder, Jochai, whether you want to weigh in on this.  Obviously Mozilla is about open source but doesn't run its own platform in a strict sense.

     >> JOCHAI BEN-AVIE:  In some ways yes and in some ways no.  The foundational observation is, I think the law certainly used to conceive, and in many jurisdictions still conceives, of: company develops product, company has liability for product, right?

     I think we are seeing a wave of companies, big and small, who are adopting open source into their core business practices, into their core product development.  We are seeing even Microsoft, probably the exemplar of proprietary software in the history of the Internet, really adapting to open source models.

     We see this at Facebook in some of their projects.  We see that Android, at the end of the day, is an open source project.

     But I think we need to evolve some of our thinking in the law and in policy around the liability structures there.  How do we do that?  It is really important to be thoughtful about what your contribution model is.  Android, Mozilla, Signal, and Drupal are all open source projects but have different contribution models.  There are a lot of benefits to being an open source project.  Signal, which is open source, has said that their use of the GPL 3.0 license is primarily for quality control and to give people a sense of trust in the product, in that they can take a look at the code.

     But for a product like Signal, you are going to be hard-pressed to find someone who has as much commitment and mission drive to really deliver on the product roadmap of that mass-market product.  That is going to be different from something like Rust, which is by its nature a very wide-open project, where the maintainers and primary developers are generally willing to pay high opportunity costs, in terms of code not written or other things not done, in order to help onboard new contributors, bring folks into the project, and give them insight into the design thinking.

     Just one other example: Drupal, and WordPress is also in this model.  A pretty diverse ecosystem, a lot of people contributing code, writing pieces of the core functionality, where you have a lot of great governance models.  You have committees that provide a pathway and deal with challenging circumstances.  But at the end of the day there's a benevolent dictator who says, this is the way it's going to be, when the community governance can't sort it out.

     And maybe I'll pause there.  We'll come back.

     >> JAN GERLACH:  Thanks, Jochai.  I'm wondering, what other governance models are there?  We heard about two open source communities; there are many.  Are there other governance models for communities out there?  I sometimes wonder about research into actual Facebook groups, which have a founder, a benevolent dictator in a way, but within them, what is the model in the groups, how do people interact?  Are there similar models that we see in there, or on other platforms that theoretically have a very top-down approach, with terms of service and with policies that evolve a lot, and that make it sometimes hard for communities, or people who find each other on these platforms, to know what they can say?  We don't always have to think about actual bad things, but we all know about Facebook deleting pictures of breasts or nipples, things that you cannot show, right?

     Is there room for this?  Do we see models where communities can within those platforms self-organise to a certain degree?  Maybe Xianhong, do you see this?

     >> XIANHONG HU:  Thank you for raising this topic of Internet governance.  The Internet is an experiment in governance models.  We have already heard many; there are several aspects we can think about.  I work at UNESCO mostly on freedom of expression and Internet governance, but I used to be a media person.  I still have a passion for journalism.

     So if you look at the history of how the media and media content were governed before the Internet, they also had quite a constructive self-regulation model.  We always say media should be independent, it should be free and pluralistic, but how do you guarantee that?

     One challenge is to make media independent from any pressure, whether from governments or from commercial forces.  So we have seen in many countries, in the Nordics and in Africa, and in Asia as well, that there is a press council, an independent media council, and also journalism associations.  It is like a committee structure: once there is a complaint from readers about content, it is up to this committee, which is self-regulated, this press council consisting of peer experts and journalists, to decide whether there should be an apology for fake news or whatever.

     But that mechanism does not seem to apply completely in the digital age, because if you ask Facebook or social media, they don't recognize themselves as media, right?  User-generated content is also distinguished from content produced by professional journalists.  And asking social media users whether they bind themselves to the ethical standards of journalism seems unrealistic as well.

     Now we have seen the trend that on many social media platforms we are having self-regulation by the company.  They are allocating thousands of staff to work on content moderation.  Not to mention that now the trend is to have artificial intelligence, the algorithms, the automation, which can be more effective in handling so much fake news, misinformation, disinformation, all kinds of manipulated content.

     There are many problems with this still.  You see that its nature is quite a top-down approach.  And that is why I see the power of this sort of very bottom-up process, as open source software and as Wikipedia have practiced it.  It seems to show us a new way: really depending on the grassroots, the users, the people who own their content, who produce the code, who invent this, coming to decide in a collaborative way what the proper content and information flow on the Internet should be.

     So I think it is quite a positive approach.

     It also gives me some thoughts about the so-called multi-stakeholder approach.  It is a popular word at the Forum.  At UNESCO, we want the Internet to be based on human rights and to be driven by the multi-stakeholder approach, but when we talk about the multi-stakeholder approach, very often we discuss it at the global level and at the national level.  Not so often do we think about the community and local level.  So I think if you want to advance this approach you should really get the community of users, the individuals, into this process.

     >> JAN GERLACH:  It's a challenge.

     (Chuckles.)

     >> (Speaker away from microphone.)

     >> JAN GERLACH:  Well, the speakers are over there, so maybe a little bit, yeah.  I think we can work through this.

     Thanks so much for this.  Juliet, maybe taking the last thought that Xianhong had, local context: you work very much in a regional context with CIPESA.  What are some of the challenges to community governance models that you see in your region, which is East and Southern Africa?

     >> JULIET NANFUKA:  I think I will speak more to what Xianhong just said.  You spoke about what is happening within the platforms, how people are self-regulating and self-governing in these spaces.  We see a lot of excitement.  I didn't mention when I introduced myself that I'm involved in a lot of these groups.  I like them; like a fly on the wall, I love watching what is happening.  So that is what I monitor at CIPESA.

     When we see people open a space online, what they underestimate is the amount of time that goes into, or that is required for, ensuring that the space is safe, that it is not taken over by other individuals.  That it, one, remains comfortably within, in this instance, Facebook's community standards, and that it also remains a safe space for whatever the nature of the content being posted in that space is.

     So I'll take the example of a community of journalists online who, you know, follow the traditional journalism codes of conduct and ethics and principles.  But while they do that in traditional practice, they won't do it online.  Online, things will get violent.  The language will get very, very fiery; the content is far from what you would otherwise expect from a journalist.  Quite often we see that the community or group of individuals who established that space are not sure how to deal with the content in that instance.

      In some cases you'll see a bit of an argument emerging between the owners of some of these groups.  It shows that there is still a bit of uncertainty about how to actually manage this space as a community, even if it is just a small group of, say, 500 people.  That plays out in the greater scheme of things as well.  Who should do what?  Who should have the power to remove or to point out irregular content, and what happens thereafter?  And we are looking at spaces where people tend to know each other.  There is the dynamic of physically interacting with some of the people.

      There is a gender dynamic.  We find a lot of these groups set up with a very male-dominated type of content coming through.  I keep referring back to this group of journalists, a journalism group, but half the time the content is the latest soccer scores.  When somebody pointed this out, they were removed from the group, and nobody spoke out about it.  So it shows that even the community of people within the group are not too clear on what the nature of the content should be, even though it is clear in the title, say, for journalists in East Africa.

     There is a bit of digital literacy that is required, a bit of common sense, which they choose not to exercise in some of these spaces.

     Also, I think the greater national regulatory contexts also fuel the type of content, the type of narrative, that circulates in such spaces.  We see quite a bit of self-censorship on key issues.

     When an issue is brought up, someone will be called out for talking about it: you'll get arrested; that is unsafe to say.  Or there's an immediate delete, or a blatant attack on other members of the group.  There's still a bit of uncertainty: even though the idea of a governance model from the bottom up is appreciated, how to go about it still remains up in the air in some of the spaces that I've seen.

     >> JAN GERLACH:  How to go about it.  I'm wondering, is there a general recipe for building community policies?  I know that's sort of a curve ball here, but at Mozilla you do an Internet Health Report, you work a lot on Internet openness.  You have, I would say, a very broad horizon from which you monitor what is going on.  Is there something like, this is how communities develop?  You mentioned different open source communities, but at the broader scope, is there a recipe for this?

     >> JOCHAI BEN-AVIE:  I mean, picking up on something Abby was saying earlier, I think there are best practices here around documenting, you know, what the purpose is, what is bringing you together.  That's something you can refer back to later.  What is the actual governance model for that?  Who is empowered?  What are the escalation pathways?  Having a code of conduct is important, but almost as important is: who do you report violations to?  What happens when someone violates it?  I think being clear about that matters.

     I think some of the other organisations here are pretty radically open, which is kind of interesting.  What does that look like, especially at scale?  For those of you who are interested, Mozilla uses a module system, which is a clunky way to say there are a lot of different parts of the Mozilla project that have owners and peers, who need not necessarily be Mozilla Corporation employees or Mozilla Foundation employees.  If you look at the Mozilla Public License, for example, the MPL 1.0 was written by Mitchell Baker, one of our founders and our Chairwoman.  A lot of the MPL 2.0 was written by Luis, who went on to work at the Wikimedia Foundation.  He no longer works at Mozilla but is still a peer and contributes at a high level, and if there was an escalation, if you raised a question about the MPL, Luis might be a better person to talk to than Mitchell.  I think she would say that.  If you are interested, you can join our governance mailing lists where we talk about these things.

     I think there's a certain amount of process, and a certain amount of being thoughtful about the structure.  Earlier this year we released an Open Source Archetypes report that lays out different types of open source projects and how your contributor community looks different, how the incentives are different.

     First, figure out what you want to do.  For different products, even different features, different parts of what your organisation is trying to do, you might have very different models, and governance structures and policies and practices that follow from that.  So I think that sort of intentionality is important, and then: document, communicate, seek feedback, iterate, repeat.

     (Chuckles.)

     >> JAN GERLACH:  Yeah, well, document especially, right?  I mean, all of those things, but when I look at how Wikimedia communities govern themselves: documentation, being inclusive for people, also time-wise.  When you are in a distributed project around the world, with different time zones, don't expect an answer to your email within the next hour, right?  I think we've all kind of learned that this is not happening anymore anyway, but document what you are doing and give people time to read up on it and be part of this.

     >> JOCHAI BEN-AVIE:  If I can venture: engage the communities you are seeking to serve.  I think even some of the best-intentioned companies and organisations, who are trying to do the right thing, are trying to move quickly, but be thoughtful about who we are affecting.  How are they involved in the process?  How are we building policies and practices that are driven by those users?  Sometimes you see pretty high-profile missteps.

      I think we are probably all familiar with some of the struggles that Facebook has had in Myanmar.  When you don't have any content moderators in Myanmar who speak Burmese, you're going to run into some challenges.  We can probably point to examples in many other countries as well.

     >> JAN GERLACH:  Yeah.  I don't want to bash Facebook here, but it is a prominent example, obviously.

     In the first round of our conversation here we've mostly talked about very specific-purpose projects, right?  Open source development in different ways.  You mentioned journalism, or the media in general, and how journalists organise within Facebook.

     But this makes me wonder: is there even room for a large-scale, general-purpose platform with a self-governing community?  That is basically what we at the Wikimedia Foundation talk about a lot -- not general purpose, but that large-scale communities can govern themselves on a platform.  Can that model be transferred to a general-purpose platform like Facebook or Twitter?  Will people be able to self-govern in that way and at that scale?

     I wonder, Abby, you seem to have thoughts on this.

     >> ABBY VOLLMER:  Partly because I gave this disclaimer that maybe everything I'm saying is easier for us because we have somewhat self-selecting community members, I do want to talk about how the things that happen within open source communities can be useful.  Just in terms of the layers of moderation: at the top level, I guess you can think about the laws that are out there.  Then, at a company like ours that is dealing with communities that are able to do some level of moderation on their own, we find that the company itself, our legal team, will come in and do a layer of moderation on top of that.

     I think that if communities are doing a good job, there's less opportunity or need for the legal teams of those platforms to do more, and there's less need for governments then to do more.  To the purpose of this conversation: if we can help communities do a very good job, there's less need for a lot of intervention all the way up.  In that light, we try to help our maintainers think about how to create welcoming communities and moderate effectively, and we give them tools to moderate effectively, so that you don't need various other interventions.

     Some examples of that: for moderators to think about the perspectives in the communities they are moderating; to give everyone the benefit of the doubt and assume that people are there with good intentions; to keep the conversation on topic; and, if people are being harassing or monopolizing the conversation, to intervene and try to keep things productive and positive.

     And to be clear, and to think about how whatever you're saying, whether it is rules or a decision about moderating content, will be received by the community and how that might change expectations going forward.  There are top-level things like that.

     But I guess the other part, and it's related somewhat to what Jochai was saying earlier, is communicating expectations, with a code of conduct being part of that.  It was your question, too, about a recipe, right?  We don't say, this is what your code of conduct should look like.  We help maintainers think about it, provide different models, even templates, for them to think about the scope: who is this going to apply to, where will it apply?  Without telling you too much about the nuances of open source projects, there are different ways that people can contribute within a project.  So does it apply just to the comments or replies, to these kinds of issues; where within a project does it apply?  What happens if there are violations?  How can people report violations?  Who handles the violations?  Can I report them privately?  Can I report the person who handles the violations?  Is there somebody else who can take the reports if it's about that person?  Helping them to think about the various things that might come up.  I think all of that is potentially applicable to other platforms.

     My last point before I cede to other people, if that's okay: I was thinking about how this would scale very, very broadly.  I mentioned earlier that theoretically you could have a project with thousands or even millions of contributors on GitHub, but you don't.  Usually it's a very small community, relatively speaking, compared to the level of scale we see on platforms like Facebook, which we keep mentioning.  I'll just do that too.

     For them, I don't know.  Whether there are ways to have communities within the world of users on Facebook that sort of form the front lines in the way that Wikimedia's do, I don't know exactly where there are things that could work across the board.  But that's an example where it seems to me that if you take some elements of what you are doing on Wikimedia, and have this way to fragment the global user base a little bit into something more manageable, then maybe there are some commonalities among the people in that community that can help, even if it is determining something like a code of conduct: what sort of norms would fit those communities that we can all get behind, for these kinds of pages on Facebook or something.  I don't know.

     >> JOCHAI BEN-AVIE:  I think there's an interesting thought experiment there.  What would happen if Facebook took a more open contributor model to developing policies around pages, right?  Or around who you follow, around how you show up in the news feed?  It is sort of an interesting counterfactual thought exercise.  What would that look like?  How would you build the governance model for that?  How would that play out?

     Trying to loop this back to some of the things you were saying earlier, Jan, about the broader context that we're in: there's a lot of pressure on platforms to move quickly.  You can choose: you have fast, quality, cheap; you can pick two.

     (Laughter.)

     >> JOCHAI BEN-AVIE:  And certainly, speaking for us, our processes take a while.  I think we end up in a good place because of that.  We can identify a lot of best practices, we can identify general themes, things you ought to do.  But depending on where you fall, on which two of those you want, you are going to end up making different choices.

     >> JULIET NANFUKA:  A small comment, maybe a question; I'm not too sure yet which it is.  I like the GitHub case.  It is a structured place.  It is very specific in terms of the audience that it is dealing with.  It is open source.

     Now, one of the things we do at CIPESA is localisation exercises around security tools.  Quite a bit of the feedback we get is that some of the tools are not responsive to the needs of communities in the Global South.  And that always makes me wonder: who makes up the GitHub community?  Are there people from the Global South?  Are they not raising some of these issues?  That's the dynamic I've always wondered about.  It also plays out in other spaces as well.

     But in this instance it is a smaller space, a very specific community.  But what happens with the flow of information?  What is taken on board?  What isn't?  What is the process?  Maybe I'm putting you on the spot.

     >> ABBY VOLLMER:  Not at all.  Really, we leave it up to the communities themselves to determine that.  The platform enables people to describe their project and build the environment in a way that will hopefully attract very diverse communities.  We provide plenty of recommendations on how to do things, ways that you can create a welcoming environment; the Open Source Guides have a bunch of content on things like creating a welcoming community.  There is content in there to try to generate that, but we ourselves leave it to the communities to determine, for their particular project, what they are looking for.

     Within those communities, the nature of open source, if you are doing it publicly -- you know, building software through an open source platform -- is that things are presented in a way that communities can edit, propose edits to, and comment on.  So things like a code of conduct, and even the community guidelines that GitHub puts out there: we put those out to our community, which is all of our users, 31 million users, I think, who can provide comments to us and say, I think this is unclear, or this should also elaborate on another situation or something.  And we take that into account and revise our guidelines.  For communities themselves, that's something we recommend they do as situations arise: when you see problems occur based on what your current code of conduct is, revise the code so that you are trying to anticipate future situations.  I don't know if that answers the question, but I'm throwing things in there that are generally relevant.

     >> I'm Mark Nelson from the National Centre of Media Systems.  I want to follow up on that conversation.  For, you know, Developing Countries and countries in a state of fragility, where ethnic violence is taking place and where online communities are the source of much of that: is there a role for regulation in this?  You mentioned that the very top level is laws.  I wonder, are there incentives that could be put in place by intelligent government that would make more self-regulation likely to happen?  Obviously that is not an incentive system we have right now.  The kind of things you are talking about are very rare in the online space at a global level.

     So, you know, finding a way to reduce this misgovernance that is going on in the online space around the world is really a critical issue.  And regulators -- I was just in Ghana talking to the national regulator of media there.  He says, you know, this is really, really a huge problem for our societies.  We don't know what to do.  How do we do this?

     Because there is no way for us to intervene in this way.  Of course, they come up with really very bad policies in the absence of having good ideas.

     Do you have ideas about what those laws or regulations might be that would be positive incentives for this kind of behavior that you are promoting?

     >> ABBY VOLLMER:  I can do it, but do you want to?

     >> JAN GERLACH:  Go ahead, Abby.

     >> ABBY VOLLMER:  I don't have an exact answer for that one either, but I think we are at a point in the digital age where some level of regulation is coming, or is already here; we're in it.  Part of my job -- and I know others in this room do this too -- is working with regulators on technology, where you find various levels of familiarity with the technology.  I think that creates an opportunity for people who understand the technology and understand the law to make effective, good laws.  But it also sometimes results in laws that are maybe well intentioned, where the actual words aren't achieving what people intend or are hoping for.  And technology often outpaces law, so if you try to be too specific in your law, it can quickly become outdated, or loopholes can subvert what people are trying to do.

     Some level of regulation makes sense.  This is where my earlier point comes in: if we can help at the most local level of action, the community level, if we can make that effective, then we help prevent the need for so much regulation, especially at the level that is distant from the users who are creating the content.  So, yeah.

     >> JAN GERLACH:  I want to take a step back here and think about what you actually mentioned: how room for this, or even incentivizing such self-governance through laws or regulation, can play out.

     Xianhong, at UNESCO you are involved in the development of the Internet Universality framework.  I wonder whether community self-governance, the room for it, encouraging it, somehow plays into that framework, and how you look at that at UNESCO.

     >> XIANHONG HU:  Thank you for this question.  That also came to my mind.  I should start by commenting on the challenges I see: the room can be quite limited.  I know the techie community and the knowledge-based platforms have already gone quite far in having this community-based governance.

     But you should also be aware of why it can go so far: because your platform is operating in a very favorable, conducive national regulatory context.  It won't be easy to duplicate in other contexts.  There are so many different kinds of platforms operating in different national contexts.  We did research before looking at the Internet intermediary liability issue.  One result is that whether those platforms are liable for the content they are hosting really depends, to a large extent, on the national legal and regulatory framework.  That is something you cannot easily change to make it facilitate this community governance.  That is one challenge for having more room.

     And related to this, I would like to look at it through the UNESCO lens of the Internet Universality framework.  We are advocating norms which should be applied to the national legal and regulatory framework -- for example, being human rights-based.  If we have a national context with environments that are very much human rights friendly, then whatever platform operates there respects the basic rights, the free expression of media and of individuals.  Also, in terms of online content, should we also think about privacy?  If content concerns others' personal information, that is another issue quite closely related to this.  There are more complex rights aspects to consider in this community governance as well.

     Also, drawing from what we have advocated in the universality principles of the Internet, I would say that to allow more room for this community, media literacy can be one ingredient to fit into your recipe.  Because if you look at the users, I mean the community you are talking about, you might not have a very constructive environment.  On the other hand, if you look at the community, it can be very fragmented along many, many borders: along language and even national borders, and along culture.  As we just discussed, they won't so easily come together as a community to endorse guidelines and then take action.

     But on the other side, you see why countries, why states, are so driven to regulate: because there are also many substantial challenges, for example, defending the integrity of elections, or algorithms creating so many echo chambers -- so many issues and challenges to tackle.

     Whether this community can be inclusive and eventually take effective action to tackle that will also be a challenge for making room for it.  I also feel that in the digital age everyone has really become a content producer, and they need the capacity for that.  There is such a gap.  We teach children in school; we should educate adults and officials, everybody, to have a new type of literacy to handle complex procedures and to assess information critically, so that eventually they can be part of this community to self-govern the content.

     >> JAN GERLACH:  As you pointed out and I did in the framing of this session, there are a lot of things to fix on the Internet.  Literacy is obviously a part of that.

     But I think also going back to the theme of trust, trust also means that you need to give users agency, right?  That you need to empower them and enable them.

     So they can actually help fix platforms, fix the Internet.

     Juliet, in the space you observe, how does user agency play out in different cohorts?  Do you see patterns in how communities actually start to work when they are enabled?  Or is there a pattern in why they break down when they don't feel agency?

     >> JULIET NANFUKA:  I think the opportunity for agency is still threatened in many countries that we work in.  As has been pointed out, the current laws and regulations really punch at that sense of agency.  We have laws that literally criminalise content posted online.  In a country like Tanzania you have to pay to have a blog.  Egypt decided that if you have more than 5,000 followers you will be treated as a media house, and the history of media there is not great in terms of how the state interacts with it.  In Uganda there is an online regulation law: you need to be registered with the state and give out your personal information, who is involved with it, and that brings up issues around data privacy.  We do not have a data privacy law.

     Then you have the issue of access to social media itself.  We keep repeatedly punching at the online agency that people would otherwise have, the excitement they would otherwise have to be part of communities and to generate content without the fear of being hunted down for whatever it is that they have posted.

     So even though the idea may be there, it is tightly followed by a sense of fear, which plays out even in existing spaces.  Why are you posting this?  Take it down immediately, the police are going to come after you.  I do not want to be part of this group.  This sense of agency is not as strong as it otherwise is in other parts of the world.

     So as long as we maintain these laws and regulations, which perpetuate a sense of fear and perpetuate self-censorship, we are going to maintain the struggle when it comes to organising communities online.  That is not to say it is not happening, but it could otherwise be happening on a much bigger scale.

     Take a platform like Jamii Forums in Tanzania.  It is an online news organization where users generate the content.  It is also a whistleblower's paradise: if you hear something, you can go right on there.  The authorities came after it demanding information about who the people on that site are.  Yet it has been playing a very vital role in the freedom of expression landscape in the country.  And now it is also one of those sites that has to pay to remain online, but why should it?

     We see a lot of affronts to the opportunities that would otherwise enable a replication of Wikimedia or GitHub in some of the countries in the Global South, or at least in East Africa.

     >> JAN GERLACH:  Maybe we have a couple more minutes.  I just saw yesterday that Austria, I think, is proposing a law requiring users to use their real names on social media.  Anonymity and pseudonymity are important parts of being able to meaningfully engage in a community.  I want to throw it back to you, Jochai.  How does the ability to speak freely under a pseudonym or anonymously play out in the communities that you watch?

     >> JOCHAI BEN-AVIE:  I think, looking at the world, we know that the capacity to speak anonymously and pseudonymously is critical to free expression.  There are many parts of the world where using your real name online will get you killed; it has gotten people killed, detained, arrested, in some cases tortured.

     Yeah, this is not an academic exercise.  So I think we need to look at that.  We are a far cry away from the Internet being a safe space for all to speak freely online.  That comes in a lot of different flavors.  Part of that is the tactics of countries that do not uphold the human rights obligations that I think those of us in the room probably share, and that are established in international law.

     I think we see other threats that are perhaps not quite as pointed in terms of authoritarianism.  We know that there is an epidemic of harassment online.  We know that many people cannot speak freely.  And these are real challenges.  They are affecting our discourse.  They are affecting our society.  I think sometimes, in an effort to try to do something in recognizing those problems, it is tempting to say: well, go fix this problem.  We are requiring that you fix this kind of issue in an hour, or in 24 hours.  And that is incredibly challenging.  That kind of legal mandate doesn't solve the problem in and of itself.

     So there are a lot of different layers to this.  It is important to disambiguate problems.  You and I were chatting in the hallway yesterday about this: there are meaningful differences when we are trying to deal with online harassment versus hate speech versus terrorist content versus the tactics of regimes with varying degrees of objectives, from authoritarianism to trying to maintain order in their communities.

     We need to be very specific and embrace some of that nuance.  I'll leave three questions.  First, from the user's perspective, what would it take to reach someone with power?  You know, when you see content that either is harming you or that you think is harmful in general, what are the steps that you have to go through?  What does it take to reach someone who actually has the power to do something about it?

     Second, for the people who flag things incorrectly, whose flags do not comport with your understanding of the guidelines, the terms of service, and the law itself: what education are you giving back to those people to say, this was not a good flag?  How do you help to raise the level of understanding within the ecosystem that way?

     And third, you said that a lot of the trouble comes from online communities and online discussions.  I think that online discussions reflect our society, reflect societal divisions.  And there is a lot of bad behavior that we see online that does rise to the level of crime, but we don't see a whole lot of prosecution for online crimes.

     So yes, it is tempting to go tell platforms go fix this.  But I think that there are other areas where we need partners in the space.  This takes a whole village of folks to try to make the Internet a safer, more secure, more open place.

     >> JAN GERLACH:  Literally a community, right?

     >> JOCHAI BEN-AVIE:  See, I just served that one up for you.

     >> JAN GERLACH:  I'm impressed; this worked really well.  We're almost at time.  I want to go through the room: are there any questions or comments you want to share?  Otherwise I'll give the mic to Anna to do a quick wrap-up of conclusions.  Anybody?  Yeah, it's early, okay.

     No?  Good.

     >> ANNA MAZGAL:  Yes.  Thank you, Jochai.  I think you did half of the wrap-up for us.  Just to quickly bring out the key points: I think we talked about all the levels where the right conditions should be in place so that we can really ensure that communities really have power, really have governance.

     First of all, we talked about the preparedness of the communities to do that: how they self-educate, how they enter discussions, how they lay out their rules of engagement so that they are inclusive and open.  We had different models, from GitHub through Wikimedia to Mozilla, to attest to that.

     On the other hand, there is the question of the preparedness of platforms.  How prepared are they to actually assist those processes?  And we saw clearly that the good effect of their involvement shows where they provide guidelines and support, even in terms of providing templates and some education to users, so that users can better understand the conditions, and not self-censor and self-police but really constructively moderate content so that it is not unsafe, yet on the other hand courageous and real to people's hearts.

     Basically the issue of empowering users.

     We also saw a bit of a tension between that empowerment and the fact that we want to talk about governance.  We brought up the fact that platforms are private spaces; they are privately owned online communities.  And the tension is between how they set out, in a somewhat authoritarian way, the terms and conditions of engagement with content and with other users, and to what extent they actually delegate some of those risks to the users, and to what extent they can actually allow governance.

     We were wondering whether the models we talked about are really adaptable from platform to platform and what that would require.

     And on top of that, we of course have governments that have different ideas.  For global platforms this is also a challenge: to meet all the requirements in all the different geographies.  And governments are sometimes not very well prepared to deal with those issues.  We spoke about the Burmese example, where things failed not only because there were no moderators who could actually engage with content in the language, but also because of the government's inability to address, or even its assistance to, that hateful drive, which happened also in real life.

     Also, we talked about the fact that governments want to attack the issues at their end, sort of.  For example, they try to take away anonymity, which we know is important in many parts of the world for individuals to engage with each other, or they fix laws when laws cannot fix systems.  One of the challenges is, at a level where we can actually look globally at those issues, such as the guidelines of UNESCO and multi-stakeholder environments, how we can meaningfully set out ideas that attack the issues, also those that come out of the way the platforms operate and how they want to deal with liability, and not only at the end where the users are and where those measures can potentially backfire.

     >> JAN GERLACH:  Wonderful, thank you.  We are at time, everybody.  Thanks so much for joining us this morning for this conversation.  Thanks to my speakers and to my co-organisers.  Have a wonderful rest of the day.  Thank you.