IGF 2023 – Day 3 – Upholding Human Rights in the Digital Age: Fostering a Multistakeholder Approach for Safeguarding Human Dignity and Freedom for All – RAW

The following are the outputs of the captioning taken during an IGF intervention. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid, but should not be treated as an authoritative record.

***

 

>> PEGGY HICKS: Hello, everyone.  Thank you so much for joining us for this main session on Human Rights, Upholding Human Rights in the Digital Age:  Fostering a Multistakeholder Approach for Safeguarding Human Dignity and Freedom for All.

My name is Peggy Hicks.  I am Director of Thematic Engagement, Special Procedures, and Right to Development Division at the UN Human Rights Office in Geneva. 

We have gathered here an amazing panel.  So, I won't take too long at the start.  But maybe just to make a couple of framing comments about why we have thought this was a really important conversation to be having at IGF this year.

It starts from the idea that the human rights framework is not just a legal obligation, which it is, but it's also an incredibly useful tool that needs to be brought into conversations at the Internet Governance Forum and everywhere else where the issues around the internet digital technology and artificial intelligence are being discussed.

We see this human rights framework for what it is.  It's a universal framework, a set of documents that has been agreed across contexts, across continents, and it provides an enormous amount of resource and material that can help guide some of the tough issues that I have heard in the many sessions we have been a part of in Kyoto.

We are looking for ways to make sure that that is available to ground some of these conversations.  It brings in, of course, the ethical conversations.  We, of course, are often brought back to the ethics and values language.  But we think the human rights framework is a reflection of our ethics and values and gives us a place that we are able to work across all the different stakeholders and contexts in an effective way.

I also wanted to emphasize at the start how much this is a global conversation and how difficult it is sometimes to make sure that that's reflected in reality, as well as in sentiment.  We still see that discussions on some of these issues tend to be dominated by certain regions and certain sectors, and that we don't have enough of the voices of those who are going to be directly affected, and are being directly affected, by digital technology in the room.  The human rights framework, I think, helps us to make sure that we are listening to the voices of those who are most affected by digital technologies.

Finally, I wanted to mention that we have asked our panelists to give us a sense of what their expectations are for the Internet Governance Forum and for the Global Digital Compact that my boss, the Secretary-General, has been working on, and how to really advance those conversations from a human rights perspective.

So, we are going to be looking at, for example, concepts like how do we develop a better evidence base for the work that we need to do in the digital sector and on artificial intelligence, for example, and the need for us to have better monitoring and observatories and data that will help us look at these issues.  And, of course, coming back to the framing of this session, the importance of a multistakeholder perspective.

And the multistakeholder perspective, I have to emphasize, is one that provides not just token participation, but, actually, meaningful engagement from all communities.  And one of the things we keep coming back to is that it's not enough to open the doors.  It's also important for the resources to be there to allow that to happen.  An example is the researchers who are going to need access to some of the technologies that we want investigated: will they have the computing capacity to be able to do that work?  Do they have the resources to be able to do what we need them to do as researchers and academics in this system?

Those are some of the questions that will frame the conversation we are about to have.  As I said, we are very fortunate to have with us an incredible panel today.  I will introduce each of them as we go forward.  We are going to start with some initial remarks from each of the panelists.  And then we will move quickly, I hope, after that, into a question‑and‑answer with some time at the end, I hope, for us to come in with some final comments.

So, with all that in mind, I am going to turn to our first speaker.  In fact, two of our speakers are going to be online, and our first speaker is Dr. Cameran Hooshang Ashraf, who is the human rights lead at the Wikimedia Foundation and teaches in the Central European University Department of Public Policy.  We are very fortunate to have Cameran with us, and we will turn to him online now.

>> CAMERAN HOOSHANG ASHRAF: Can everybody hear me okay?

>> PEGGY HICKS: Yes, we can.

>> CAMERAN HOOSHANG ASHRAF: Wonderful.  Thank you.  I want to thank the conveners and the organizers, the moderators and the rapporteur, all the panelists here and, of course, the panelists online, and everybody who is here or is watching online.  I wish I could be with you.

My name is Cameran Ashraf.  I lead the human rights team at the Wikimedia Foundation, which supports Wikipedia and other free knowledge projects.  We provide the technical infrastructure and support hundreds of thousands of volunteers around the world who contribute to Wikipedia.  I am also an Assistant Professor of new media and global communications in the Department of Public Policy at Central European University in Vienna, Austria.

And the subject of this panel, broadly speaking, is safeguarding human dignity and freedom for all in the digital age, and I am personally appreciative of this choice of wording, as I feel that a strong belief in human dignity is why many of us are here at the IGF or why we work in technology.  It's a complex, contested and comparatively underdiscussed topic within the tech and human rights field.

And I think part of the problem and part of the challenge to understanding dignity online is that we, actually, have yet to agree on what human dignity is offline.  How a person is treated, how they are respected varies wildly by geography.  Borders can make a humongous difference, which demonstrates to me at least that conceptions of dignity are in flux and always have been.

This question of the dignity of the individual, and of the individual's place in society with regard to technology, I think, is likely to be one of the great salient issues of this decade.  We are already asking these questions, you know, when we ponder how AI might infringe upon our dignity, what AI might do.  And also when we think about where human dignity stands amid internet censorship and surveillance.  What about companies who derive their profits from tracking us without our consent?  Or predictive content based on inferred emotional states?  What about the digital divide, people not having access to the internet?  How are the elderly treated online?  What happens to our collective discourse when it's poisoned by misinformation?

At the core, to me, of all these questions, and a lot more that we could spend all day on, is the question of dignity, which is something I think few of us can define or even begin to articulate.

I think really human dignity is something that we just have a sense of, perhaps an intuition.  But it's not something that we can, actually, just look up on Wikipedia and conclude the discussion.  And while Wikipedia won't settle this discussion on human dignity, I believe that Wikipedia itself is premised upon dignity.  To me, it's the idea that everyone, everywhere has something to contribute.  And importantly, that what they contribute is not for sale, it's not to be exploited, and that the individuals who create this knowledge are free to develop their own approaches towards managing the knowledge that they are stewards of.

In other words, there is no interference.  Yes, Wikipedia is an encyclopedia.  It's not a social media platform.  It's not an opinion page.  Volunteers collaborate, debate, deliberate, argue, discuss their edits and curate the world's knowledge.  They provide citations and sources.

They weigh multiple perspectives so they can make good faith decisions about content together.  They really do embrace the spirit of collaboration across national borders to provide the most accurate information possible for the world.  Dive into Wikipedia in any language.  It's really, I think, a very humbling experience to see how much people have created, not for profit, but because they want to, because they care.

These volunteers set and enforce rules for what does and doesn't belong on the projects, guided by a universal code of conduct, which is supported by the Foundation's genuinely firm commitment to human rights standards and, I think, by a belief across the Foundation, among all of our staff and our volunteers, in the dignity of the individual to contribute to the world's knowledge freely.

I look forward to this panel and to expanding and discussing this topic today with both the panelists and the audience.  Thank you.

>> PEGGY HICKS: Thanks very much, Cameran.  I think you have started off on a really important note, really grounding the discussion in those concepts of human dignity and raising the issues that we all know need to be part of the discussion with regard to surveillance, the digital divide, and the impact on vulnerable communities.  These are all things that we are looking to see as part of conversations here at the IGF and in policymaking bodies across the globe.  But I guess one of our challenges is, how effectively are we bringing that forward?  With that question in mind, I will turn to our next speaker, Dr. Eileen Donahoe, who is the Special Envoy and Coordinator for Digital Freedom at the U.S. State Department, formerly known to those of us in Geneva as the U.S. Ambassador to the Human Rights Council there.  Looking forward to hearing your thoughts on this, Eileen, please.

>> EILEEN DONAHOE: Thanks so much.  It is great to be back in the IGF community.  There is tremendous energy this year.  I think of myself as one of those strange multistakeholder animals.  I have been in different sectors working on human rights and technology issues for a long time.  I was in civil society, actually, with Peggy at Human Rights Watch.  I was also at Stanford for the past eight years or so, where I ran a centre called the Global Digital Policy Incubator, and we really focused on the implications of tech for democracy and human rights.  And, as Peggy said, I'm now back in the U.S. government as Special Envoy for Digital Freedom.  The way I see my mission, the top-line mission, is to elevate human rights throughout U.S. cyber and digital policy, but also to elevate it internationally in all of the technology conversations.

I have identified in my very early days three priorities, and there is tremendous overlap with the agenda here at the IGF.

The first of which is international AI governance, where the goal is, as Peggy said, to solidify the status of international human rights law as the foundation for governance of AI.  And I think of that human rights framework as peculiarly well suited to the governance of AI, because if you think about all of the risks and implications of AI that people are concerned about, starting with privacy, equal protection, nondiscrimination, concerns about ramped-up surveillance, freedom of assembly and association, freedom of movement, freedom of expression, implications for the information realm, all of those are human rights considerations.  There's also the other side of the equation, which is inclusion in the benefits of AI.

And then the other part, I have a little bit of a philosophical streak.  I feel like AI is raising these existential questions about the centrality of the human person in the future of society and the future of governance as the focal point, and for all those reasons, I think the international human rights framework really speaks to the challenges.  I will also note, unlike any other normative frameworks that I'm aware of, it does have the status of international law, it is universally applicable, it's a shared language across the globe, and for all of those reasons, I think it is just very well suited for international AI governance.

The big move I would like to make there with everybody here is this: many of you will recall that in 2012 there was the first UN resolution on internet freedom, and it laid down that foundational idea of human rights being applicable online as offline, you know, back in the days where we, actually, saw these as different realms.  And now everything has collapsed together.

I think the same move has to be made with respect to international AI governance, because we see this proliferation of risk management frameworks and ethical guidelines, and they sometimes use the same language and mean different things, sometimes they use different language.  And my observation is that many of the people involved in crafting these frameworks are super well intentioned, very knowledgeable, understanding the technology, but underexposed to the international human rights law framework.

So, I think that is really the job of this community, to advocate for this framework and have it be the foundation upon which risk management frameworks can be built.

Second big priority is digital inclusion, in the multidimensional sense.  It, obviously, starts with basic connectivity for everyone, and I'm sure this community knows well that something like 2.6 billion people on the planet are still unconnected, and that is a priority.  But meaningful inclusion is multidimensional.  We have to stay focused on inclusion in data, which goes directly to equal protection and nondiscrimination, inclusion in content creation, like Wikimedia, inclusion in the coding community, in the governance community, and especially maybe inclusion in decisions about the application of AI.

So, for all those reasons, I feel like this multidimensional concept of digital inclusion, we have to remember, it isn't only basic connectivity.  It's all those things.

Last point is, of those 2.6 billion who are unconnected, women and girls make up the majority, and that is a really underappreciated fact.  It was really brought home to me on day minus 1 here, at a really amazing event.  We talk about the gender divide and we talk about the digital divide, but they are really ultimately one and the same thing.  And we have to elevate that.

And I feel like women and girls are also less likely to be included in all those other layers in the data, in the content creation, in digital literacy programmes.  So, I really think we have to underscore the gender piece.

Last is information integrity, which is a really challenging subject, because we always have to take care not to undermine freedom of expression when we are seeking to stop the erosion of integrity in the information realm.  And I don't think anybody has quite figured out how to do that yet.  But it has to be prioritized, and I think governments around the world, civil society actors, everybody is getting much more engaged on the practical dimensions of how we do that.

I will mention, we were just involved in the Canadian-Dutch global declaration on information integrity at UNGA, and we were really pleased to support that initiative.  I think it's got great content, and I think it can really serve as a basis for conversation going forward.

Last, I want to say, it's a really significant moment in the IGF lifecycle.  And I think everybody here is comparatively more sophisticated about multistakeholder governance.  But we have to all raise the bar even further.  And what does that mean at different layers of the stack and in different sectors, et cetera?  I don't think that's fully fleshed out.

And last point is, this panel is framed around two things.  One is the substance of human rights, human dignity in the digital context, and the other one is the multistakeholder processes and how do we advance them.  Those are not really separate.  I think we have to remember process impacts substance.  And multistakeholder process is how we protect human rights.

So, I will stop there.

>> PEGGY HICKS: Great, Eileen.  All sorts of wonderful points there that we need to bring in.  I really appreciated what you had to say about the multiplicity of frameworks as well, and the extent to which the human rights framework can ground those other initiatives.  It's not that we have to move away from them.  But we need a common framing that pulls them together in a way that makes it so that we are not spread across too many different places.

Your point about gender, I have to say, I have been here a day and a half, and I haven't heard it nearly as often as I would like, so I am very glad that you made it.

And the points on the way that we move forward on digital inclusion, I think are crucial and I'm hoping we will come back to them in some of our other speakers.

I will turn now to Professor Peter Kirchschlaeger, Director of the Institute of Social Ethics at the University of Lucerne.  We have shared a panel before and we are back here again, Peter.  So please, give us your thoughts.

>> PETER KIRCHSCHLAEGER: Thank you so much, Peggy.  I also wish to thank the organizers for having me on this panel.  Being an ethics professor focusing on the ethics of AI, I need to pick up a point Peggy was mentioning before and start to build a bridge between human rights and ethics, because I think, from an ethics of human rights perspective, it is, actually, crucial to start with a minimum standard, human rights, allowing people to survive and allowing people to live a life with human dignity, which is very much informative, I think, for, basically, the entire value chain of AI.

And, you know, interestingly enough, we can also observe a certain convergence of ideas when we look at the different processes: the results of the IGF so far, the High-Level Advisory Board on Effective Multilateralism, the consultations in the framework of the Global Digital Compact, the policy briefs by the Secretary-General and his statements in the UN Security Council this summer, the latest resolution in the UN Human Rights Council, and statements by the UN High Commissioner for Human Rights.  Converging ideas in the sense that, first, it is clear that we need a regulatory framework; secondly, it is also clear that this regulatory framework needs to be human rights based; and thirdly, there is also the convergent idea that we need at the UN some institution, some body, some entity taking care of the enforcement and implementation of this regulatory framework.

And I think this reality, these converging ideas coming together on these focal points, is something we also need to celebrate: that we have already achieved, in the discourse about so-called AI, a consensus with these three characteristics.

I would like to add, from an ethics of human rights perspective, that using the human rights-based approach for the regulatory framework is also good news for technology, because it does not overburden technology with some higher ethos.  I mean, human rights is really the minimum standard from an ethical point of view.

And, secondly, human rights are able not only to protect, but also to foster and promote diversity, and by that, also to promote and foster innovation: by allowing people to think out of the box, by allowing people to freely express their opinion, by allowing people to have access to all the information which is there, which, of course, is crucial for being innovative.

And then, regarding the agency: there is a need for a UN body taking care of this existential issue of AI and how we use AI on the global level, including the huge positive potential AI can have for us as humanity and for the planet, but also looking more precisely at the ethical risks it poses.

This agency could, based on my research, a multiyear research project I started at the university and finalized at the University of Lucerne, follow the model of the International Atomic Energy Agency.  I would argue, and that's a proposal, a suggestion for further thought, that both nuclear technologies and AI share a dual nature, both of them having an ethically positive potential but also an ethically negative potential.  And in the field of nuclear technology, I'm simplifying now very much, we, basically, did research first, then we created the atomic bomb.  Unfortunately, we used the bomb several times.  And then we realized as humanity that we needed to do something about it, and we created the International Atomic Energy Agency at the UN, basically taking care that the worst is avoided.  And I'm not that naive; I acknowledge, of course, that it's not perfect and it has its geopolitical implications, but still we need to acknowledge that the International Atomic Energy Agency was able to avoid the worst.

Thinking in similar terms, I would suggest following the model of the International Atomic Energy Agency also in the field of AI, creating an International Data-Based Systems Agency, IDA: first, identifying the ethically positive opportunities which AI is offering to us.

Secondly, identifying the ethical risks; but also, thirdly, enhancing international cooperation, collaboration, technical and technological exchange, and we see here at the IGF how fruitful that can be.

And, of course, also benefiting from all the different initiatives in this field, bringing them together, combining the multistakeholder approach with the multilateral approach, because both of them have their advantages and disadvantages from an ethical point of view.  So bringing that together, making sure that at the end of the day, all humans can benefit from AI and the planet can benefit from AI as well.

Thank you so much.

>> PEGGY HICKS: Thanks very much, Peter.  I think you did something not that easy, which is you gave us an optimistic perspective of where we are.  We have consensus around the needs, in a way, in the broad sense, the pathway is there, but how we get there is the critical question. 

And I think your comments, as well as Eileen's on information integrity, point to the fact that we need to now take that conversation to the next level.  We have identified these complex issues.  We have recognized there's no silver bullet, there's no switch that we can flip that is going to solve these issues.  But we need to really look deeper at the different needs.

And, for example, on the broader AI question, there's the compelling idea that we can't expect any one institution to, actually, necessarily accomplish everything.  But the idea, as you put forward, of an observatory, or really having an authoritative body that can at least give us some of the evidence base and the monitoring that we need, is a step in that direction.

I would now like to turn to Mallory Knodel, the Chief Technology Officer at the Center for Democracy and Technology, a key partner of ours on these issues and one with enormous expertise, so we are really looking forward to hearing Mallory's thoughts.  Please.

>> MALLORY KNODEL: Thanks, thanks very much to the organizers and for inviting me to talk about this.  In fact, I do want to touch on some of the work that we have done with the OHCHR at some point.  My response to this is a little bit broader than just what the IGF community does.  I think of the whole constellation.  I am looking at the ceiling.  There's a whole constellation of fora out there that govern the internet, that do different things.  We are now including AI, not just the internet anymore.

And they all are meant to come together here at the IGF, where we present, we share, we analyze all these different issues across fora.  And so when I think about these issues, I think about that broader landscape, although they all end up landing here, don't they?

I want to reaffirm what others have already said, which is that this isn't a panel about ethics.  This is about human rights.  It's the most tangible, useful mechanism we have to talk about the most pressing social issues of our day.  One of my former colleagues always said human rights is governance with teeth.  And she talked about that in her paper on AI.  But I think it applies across the board when we are talking about governing technology.

And so, because this is a really complicated landscape that's only getting more complicated, I think it can be useful to just take a moment to appreciate that and also to analyze what everyone is currently doing and what their relationships are to human rights and what their roles and responsibilities are.

So, we know that states have the obligation.  We know that companies have a responsibility to human rights.  I wonder, though, sometimes how we conceive of all the other stakeholders.  For me, having worked in this field for my entire career now as a civil society representative, it is really the vision and the mission of civil society, in essence, why we are here.

But then we have academia, which I think plays a really important role.  But where I work most consistently is in the technical community.  And I don't know that we have firmly established what the technical community's relationship to the human rights framework is.  I think we are working on that now, and the OHCHR report that is soon to come out touches on ways of mechanizing human rights discussions within the technical community and supporting those discussions, but we haven't yet had the philosophical conversation of what the technical community is for or to the human rights framework.

Now, to reflect for a moment, the technical community admittedly is made up of other stakeholders.  It's really industry and states.  You increasingly have civil society members there, but not that many.  And so then there's a question of, okay, if you have states that are obliged to the human rights framework, if you have companies that are responsible for it, why isn't the technical community already talking about human rights?  Why isn't it already baked in?

And I think that it's comfortable maybe to talk more about the technology than it is to talk about the hard problems, or that talking about the hard problems in technocratic terms is a comfort zone.  It's also the language of products and of producing, right?  So, there's just a different way of talking about it.

When I very first started engaging in the technical community, I found myself always having to say human rights things.  But in a totally different way.  I had to create these value trade‑offs.  So, there was one design possibility over here, but then there's another design possibility over here.  Let's just list out the requirements and talk about the trade‑offs, rather than saying, this one is better because it's better for freedom of expression.  Because that wouldn't have gone anywhere.  That wouldn't have helped.

I think we have made progress, right?  I think that now that there are more and more human rights advocates in the technical community, it's becoming easier to talk about this as the end goal, because human rights would be better under this design, but it still very much has to be, sort of, broken down into constituent parts and then put back together.  And that's really important work that we need to keep doing and we need to do more of.  Not because we should turn technical discussions into political theater.  But because, again, as my colleagues have been saying, these are actually the hard problems.  When you characterize the hard problems of technology at their roots, in their real essence of being about people, you, actually, get to the answer quicker and better, and that's all we are really trying to do.

While that can be challenging for people who have only been formally trained in the sciences, it isn't impossible.  It wasn't impossible for physics and chemistry.  It's not impossible for computer scientists and engineers either.

I think I should probably be ending soon with my time.  But I wanted to just really quickly say that, substantively, one issue that I have seen travel throughout all the fora, across all the stars in the internet governance constellation, is censorship, and internet resilience.  And I see that as one of our first starting points: a real end-user or people-centric issue that all fora can engage on.  Every technical community conversation, every standard has a role to play in figuring out how we ensure the internet stays on and that there's meaningful access everywhere.  So I would invite folks to really consider that as a centerpiece for cross-fora engagement on internet issues and human rights.

So, I would just hope that this conversation continues to talk about human rights, not just at the IGF, but in all fora, and at the IGF in perpetuity.  It's not that human rights is coming in or out of fashion.  It's a constant vessel for all of the things that should matter to us, and we need to make sure that no matter what forum it is, no matter where that forum is hosted or what venue we are in, we are able to bring these issues up and to tease out the really important things for people everywhere.  That's it.  Thanks.

>> PEGGY HICKS: Great.  Thanks so much, Mallory.  I think you made some fabulous points there, and really this emphasis on the bridges to the communities that have to be part of the conversation.  And what I liked about what you said is that it gives us the idea that people out there are having human rights conversations all the time.  They are just not aware of it, or not framing it as a human rights conversation.  But those trade-offs and those discussions around the impacts of technology in different ways, those are conversations that the technology experts that I have engaged with really want to have.  And I do think making that bridge, to help with the framework that can underlie those conversations more substantively, will be really helpful.

And your points about censorship and where we need to go on issues around internet shutdowns and the need for an approach that puts people at the centre are crucial.

We are going to turn now to our second online participant.  We are very fortunate to have with us Dr. Marielza Oliveira, the Director of UNESCO's Communication and Information Sector Division for Digital Inclusion, Policies and Transformation; UNESCO is good at long titles, as my office is.  Over to you, Marielza.

>> MARIELZA OLIVEIRA: Hello, everyone.  It is a great pleasure to be here with you, and I'm sorry I cannot be in Kyoto in person.  We have attended all IGFs since the very beginning, but this year we have a smaller team present.

Well, UNESCO is at the front line of seeing the opportunities, but also the barriers, that digital spaces can create, because our mandate is exactly the free flow of ideas by word and image.

So, for us, we have centered our work on fostering the kind of ethical and human rights-based framework that can enable digitalization to bring forward human rights and human dignity, because, while digitalization is advancing at a fast pace, it's not really benefiting everyone equally and is, actually, creating quite a lot of harms.

Our digital era is, kind of, a troubled one.  And we really need to reboot our digital spaces, particularly by regrounding them in trust and facts.  So, you know, regaining ground on a vigorous and healthy public debate requires that we really protect information as a public good, and defend freedom of expression and access to information everywhere.  But because of scale and reach, particularly online, individuals and societies need to relearn the value of facts and knowledge.  And we need to support fact-based, evidence-generating bodies such as academia and science institutions, as well as public interest, independent media.  And the IGF, as a multistakeholder mechanism, actually, brings quite a lot to this conversation.

We play a leading role in facilitating international cooperation and shaping a human rights-based digital future, because we, actually, work on strengthening the human rights-based standards and regulatory frameworks under which digital ecosystems evolve.

Last year, for example, we held the Internet for Trust conference, exactly to look at the regulation of internet platforms.  And at the end of this month, we are, actually, launching our guidelines, which centre this type of regulation particularly on accountability and responsibility, which are missing in digital ecosystems.

And regulation and standards are how we ensure oversight, protect the public good, and also encourage investment, because they actually create level and stable playing fields for innovation to flourish.

We see that all countries that are at the forefront of digital transformation are engaging in regulating digital spaces and technologies, particularly social media and artificial intelligence, because they really see the need for that.  But these efforts really need to be complemented by global standards and guidelines that facilitate collaboration between actors, government, private sector, civil society, this multistakeholder approach.

And so we have been setting standards in the area of digital transformation.  Our recommendations and guidance frameworks, on transparency of digital platforms, open science and open data, the ethics of artificial intelligence, ICT competencies for teachers, and others, are instruments that foster innovation while protecting human rights and promoting accountability.  That's how we centre our work.

And a second line of action that we always bring is to strengthen institutions and systems that can enable cooperation on digitalization issues.  It's really important that we develop human and institutional capacities to harness the potential and address the challenges of digital technologies and platforms.  And we prioritize the capacities of those groups whose decisions and actions have the widest and the deepest impact.

For example, policymakers and civil servants, particularly judicial operators, since they have a special role in shaping the environment in which our digital ecosystems are developed, as well as educators, who are responsible for imparting knowledge in line with 21st-century requirements, and young people, the digital leaders who lead this process globally.

And we value very much the networking, collaboration, and knowledge sharing amongst stakeholders, and we foster exchanges on best practices, on regulation, and on the conversation around human rights.

And we raise the skills and competences of users of digital technologies and platforms.  Media and information literacy is essential to build the critical thinking, technical and other skills, knowledge and attitudes that allow us to derive value from digital information ecosystems and avoid the traps set by misinformation, conspiracy theories, disinformation, hate speech, incitement to violence and others.

And the stakes are really high.  We really need to bring this conversation in a bigger way to the IGF and other digital ecosystems, particularly in this year when we have so many governance changes, for example, with the WSIS+20 process taking shape, the Global Digital Compact coming up, and other mechanisms, such as those for AI, being thought out, and so on.  This is the occasion in which we have the chance to do this big reboot.  Thank you.

>> PEGGY HICKS: Thanks very much.  A helpful perspective there, and one linking back to where Cameran started us off: information and data, two points crucial to these conversations.  I also liked the points that Marielza makes around human capacity, the institutions and structures, and the need for us to build up the ability to tackle these issues in a human rights-compliant way, and how we get there.  And I'm sure our other panelists may have thoughts on that front.

There are still two more panelists to go.  I appreciate everybody's patience before we get to the questions and answers.  But I'm going to turn now to Frederick Rawski, who is the head of human rights policy for Asia Pacific at Meta.  And my notes say, Frederick, that you are also a composer of electronic music.  So, I don't know if you will bring that in, but over to you.

>> FREDERICK RAWSKI: Thank you.  How did that fact get in there?  I didn't offer that bit of my biography, but happy to discuss that offline.

It's hard to be sixth or seventh in line in a conversation; it has benefits and downsides.  On the positive side, I have a better sense of where the conversation is going.  But I also have so many notes now to my remarks that I am not sure they are valuable to me anymore.  I'd like to thank the IGF and everyone else for giving us this opportunity, and apologies for my voice.  I have lost my voice.

And just to say that I am personally excited about being part of this panel.  I joined the human rights policy team at Meta last year, in July, after several decades of work in the international civil society space and with the UN, including with the Office of the High Commissioner for Human Rights.  And I am committed, in my current role as the head of human rights policy for the Asia Pacific region, which is part of a larger, global human rights policy team, to engaging in all of our work in a multistakeholder, consultative manner.

But with that background, I came to this space with the perspective of a critical outsider, and I had a fair amount of skepticism coming in about how successful we could be in building a human rights framing and a human rights approach into the business.

But I have to say that I have come to appreciate how successful Meta has been in doing this.  There's a long way to go.  But I think a lot of progress has been made, particularly since we adopted our corporate human rights policy in March of 2021.

We try to show this commitment rather than just talking about it.  Just a few things that we have pushed forward with in the last couple of years: building the human rights team itself, which is relatively new; adopting the human rights policy; launching the Oversight Board, which has adopted human rights as a principled basis for its work; creating a human rights defenders fund; and committing to the protection of privacy against overbroad government demands, a commitment we made when we joined the Global Network Initiative in 2013, which I think is another institution worth talking about.  We have published two annual human rights reports.  This is the most recent one, and what makes this one interesting is that it includes a summary of our enterprise-wide salient risk assessment, which is something we are continuing to do and publishing the outcomes of.  And it looks across human rights risks throughout the company, up and down the value chain.

And we continue to publish other forms of due diligence at the country level.  We have done due diligence recently on Israel-Palestine and on encryption.  We have strengthened our engagement with the UN Global Compact, our commitments to the UNGPs, and our engagements with the Secretary-General's office, UNICEF, UNHCR, the Office of the Special Representative for the Prevention of Genocide, many special rapporteurs, and country teams.  And we have a 20-person delegation, I think, here at the IGF, which I am proud of, including our President of Global Policy, just to represent the commitment that the company has to this kind of engagement.

I can already hear the groans from some of my civil society colleagues in the audience.  Okay, yeah, they are listing off all the great things they have done again.  I do want to acknowledge up front that this is not easy.  There are many challenges to integrating human rights standards into company policies, and to making them more than a fig leaf, actually influential in important decisions, even decisive.

I work in the Asia Pacific Region which has an extraordinary amount of linguistic and cultural diversity, fractured regulatory space, vibrant and growing economies and where governments have an inconsistent commitment to democracy and human rights and as you list those off, I realize I am talking about the world and not about the Asia Pacific.

So, it does sometimes feel difficult, if not impossible, to live up to the dual commitment that the company makes to both comply with local law in these many different jurisdictions and to abide by and promote international human rights standards at the same time.  That's all to say that we seek expert guidance, and we give it where we can.  We want consistent and principle-based frameworks that we can't and shouldn't be developing ourselves, and for these reasons we strongly support the leadership of the UN in facilitating the global process and improving global cooperation through the GDC and the IGF and other fora.

I will end by mentioning that we recently made a submission of inputs to the GDC and highlighted a number of actions, some of which are actions that we would take collectively in a consultative manner: urging governments to resist policies that enable the misuse of personal data and restrict protected speech; supporting the use of end-to-end encryption; and offering to support capacity-building initiatives, inclusive ones for public and private sector actors, to prevent and react to harmful, malicious and hostile behavior online.  And this is where my notes get kind of confused, so I can't follow them anymore.  On that last point around multistakeholder engagement, a couple of thoughts came to mind as I was listening to others speak.

One is, in my role at Meta, and coming from a civil society and international organization perspective, I do see there is still a significant gap in understanding across stakeholders.  As has been mentioned, conversations are happening all the time about human rights, but they are not being conducted in human rights terms and human rights language.  And there's still a long way to go to socialize and communicate that human rights framework.

The other challenge in that gap, I think, now that I am living among engineers and software designers and salespeople and all of that, is to translate those principles into action.  There are many examples, the Rabat Plan of Action, for instance.  How do you take that and turn it into language that can be implemented as policy?  How do you then take that language and turn it into something that can be applied at scale?  Something that can be coded, can be understood by engineers, can be understood by people who are promoting the business and other aspects of the business policy.

And the second thing that came to mind is this risk, the concept of risk.  I am constantly talking about risk with people.  And everybody understands this.  There's legal risk, there's business risk, there's policy risk.  And there's human rights risk.  And I think we are often talking about similar things, but we give them different weight.  They play different roles in the balance of decision-making among different parts of companies and different parts of governments.  And I think there's progress that can be made there, in both developing a shared vision about what we mean by human rights risks and impacts, and especially about how they interrelate with these other frameworks for assessing, understanding, and mitigating risk that are much more common and, frankly, more widely understood among many people in the technical and business community.

So, I will leave it at that.  And thanks again so much for giving us the opportunity to speak.

>> PEGGY HICKS: Thanks, Frederick.  We appreciate the company perspective and your willingness to come and be part of a forum like this and discuss what Meta is doing and in what areas there is still room for improvement.

I think your point about the risk assessment frameworks is a really interesting one that comes up quite a bit for us as well, and it is an area where we might be able to bring things together a bit better, taking Mallory's point about putting people at the centre of it and the human rights analysis that will help us to do that.  Fortunately, we have one final panelist who has been patient, and we are very much looking forward to his insights.  Gbenga Sesan is the Executive Director of Paradigm Initiative and a member of the IGF Leadership Panel.  Over to you, Gbenga.

>> GBENGA SESAN: Thank you, Peggy.  We are talking about human rights and multistakeholder processes, and I think it's a good time to, you know, state clearly that we can't have multistakeholder conversations if certain stakeholders can't be at the table.  It's hypocrisy at best to say that we are having human rights conversations, and that they're multistakeholder, when there are stakeholders who can't be at the table.  We are at IGF 2023, and I keep hearing stories of people who wanted to be here but couldn't make it because of visas, and it's not just this IGF.  It's the IGFs before now and many global processes.

There are barriers to entry.  And I think we have to address this.  You have no idea how dehumanizing it is to stand in front of a visa officer to defend your existence and expertise.  It shouldn't even be a thing.  Because you are going to contribute to conversations; you are not, you know, trying to do something else.

So, I think it's important for us to set that as a conversation to continue: if you call it global, then it has to be global.  If it's global, it means you have to open your doors to relevant stakeholders.

And I know this is part of a bigger migration debate, but we can't pretend that this is not happening while we are talking about human rights.

Anyway, speaking of which, when we talk about global processes, including the GDC conversation, including the IGF and all of the conversations we will have, one of the important opportunities we also have is that we have data and we have stories on human rights, either human rights violations or human rights defense, from civil society organizations that have been working on these issues for a very long time.

And I think it's very important for us to take advantage, positively, of this information, this data, and be able to improve processes.  Because when we have these conversations, one of the things we must realize is that there are people with lived experiences that we can't ignore.  And these lived experiences will help us understand what the issues are.  We don't need to commission a study, for example, to understand some of the violations that are happening in some of the countries across the world and how to respond to those challenges.

Some of you may be aware that the IGF Leadership Panel presented a paper a few days ago.  Well, not a few days ago.  It feels like it's a long week already.  I think just two days ago, actually.  We presented The Internet We Want paper, and the whole idea of that paper is to ask the questions: what internet do we have right now?  What internet do we want?  What is the gap, and how do we close it?

And of course the five overarching areas: it should be whole and open, universal and inclusive, free-flowing and trustworthy, safe and secure.  But I want to emphasize the fifth point, which is that it must also be rights-respecting.  And in saying that it should be rights-respecting, I want to focus on just one tiny area of that.  We talk a lot about the 2.6 billion people who are not connected, and I want them to be connected.  I say to people that my life story, my career journey, was made possible because of one email.  And that is the power of the internet.

So, there are 2.6 billion people who are not connected.  But don't forget there are also people who are disconnected.  And I want to emphasize that because we are talking about human rights.  There are people whose governments, or certain activities or situations, have rendered them disconnected.  And because they are disconnected, we count them as part of the connected, because we are focusing on the unconnected, which is the 2.6 billion.  And I think that's really important.  I am glad that the Freedom Online Coalition released a statement this week on internet shutdowns.  It is not a conversation we should be having in 2023.  But it is what it is: we see it as it is, and we see where we need to go.

And finally, I want to say that, you know, global processes are not for aliens, they are for humans.  And so at the centre of the conversation should be humans.  It should be human dignity, human rights, and everyone has a role to play.  States have, you know, the obligation already to make sure that human rights principles are implemented.  Civil society does advocacy, as was said; the technical community needs to bake it in; and for the private sector, it is very clear, at least between, you know, the COVID problems we had in 2020 and now, it is very clear, I think, for many businesses that human rights is good for business.  When people trust your platform, they are very likely to use it and become advocates for it.

So, I really look forward to, you know, the comments and the questions we will have, and the conversations that will continue on this topic of human rights and multistakeholderism, and to making sure that everyone, everyone who needs to be at the table, doesn't face barriers to entry.

>> PEGGY HICKS: Well, that was well worth waiting for, Gbenga.  I think the points that you make are so crucial to the conversation.  I have to pick up the first point, which is about who is in the room.  Because this has been a persistent issue, not just in this conference, but in many conferences we are at where we say we want a global perspective but we are not necessarily able to achieve it.

And I do think there's a fundamental question there about what we are going to put into that, and what all of those in the room want to say to all of the governments that they are engaged with about what is needed for this forum to be successful.  Because it's, actually, a disservice to all of us.  That idea of participation: it's not opening the doors as a favor to those who want to participate, as you said.  It is a necessity for us to be able to have the experience and knowledge that will allow us to arrive at the right approaches and ideas and insights that we need in a discussion like this.

I have gone on too long.  It's a point I'm passionate about.  But really appreciated what you had to say as well on the internet shutdown point, one that's a recurrent one from last year's IGF, I will point out.

Now we are finished with the statements from our panelists and really looking forward to seeing if there are questions from the audience that we can bring back to the panel.  If you want to come forward to the microphones, please identify yourself, try to keep your comment or question short so we have a chance to bring in as many people as possible.  Don't all line up at once.  I can't see people.  Okay, good.

>> AUDIENCE: Hi, everyone.  Yes.  My name is Carolyn Takeda.  I'm with Access Now, a global organization working to extend and defend the rights of communities at risk around the world.  Thank you for your comments, and especially for the reflections about the importance of meaningful access to these spaces, making sure that the people whose voices most need to be heard when we are having these conversations can, actually, safely engage in these spaces.  I don't think any of these reflections on multistakeholderism can really arrive at the stated goal if those people aren't able to engage.

So, I just want to present the question back to the panel, and maybe to you, Gbenga, first, to build on what you have already shared, but also Eileen and Frederick, it would be great to hear from you as well: to what extent is the news that we are hearing about the next location for the IGF, in Saudi Arabia, in any way compatible with what you have outlined here?  Especially understanding where we are in this cycle for the IGF, coming up to the end of the WSIS+20 process, and, kind of, understanding what the future of the multistakeholder model looks like, what does a move like this mean for our ability to actually bring civil society into these spaces safely and meaningfully?  So, I would like to hear what you all have to say.

>> PEGGY HICKS: Thank you.  We will take a couple of questions and come back to the panel.  I think there's somebody over here, please.

>> AUDIENCE: Thank you so much.  First, I want to appreciate the conversation on the issue of human rights and human dignity.  My name is Mishy Jumanboka.  I am a Member of Parliament from Kenya, and I represent the Parliamentary Service Commission on the issue of information and public communication.

And I just want to ask, from the human rights perspective: how are you going to address the fear that artificial intelligence can create job losses, vis-a-vis the jobs it's going to create in terms of research?  I'm just looking at a scenario whereby a job which could be done by 10 researchers can be implemented by just one application.  So, that is a fear which is felt across developing countries.

Number two is the issue of cyber bullying, which is rampant, especially in my country.  And mostly it's targeting vulnerable groups in the country, like women politicians.  We have that challenge.  And it is very much rampant.  I don't know how we are going to address it.

My last question is the issue of protection of privacy and personal data.  Recently in our country, there were some guys who came from America, and they called themselves Worldcoin; under that banner of Worldcoin, they were collecting personal data from Kenyan citizens.  It was a big debate, because even the authorities, the Kenyan government, were not aware of what was happening or what was going on.

It's like there are no international regulations.  Anybody can just pop into a certain country and try to collect some data.  So, this is a scenario where fear is being spread among citizens of certain countries, and that is why maybe we really need to have some outreach programmes, programmes for the entire globe, so that at least people would understand what artificial intelligence entails, because people even think that it can be a threat in terms of internal security.  So, I think there is a need not only for these big international forums; we also have to disseminate the same information in our countries, back to our rural areas, where up to now there is still little connectivity, where connectivity is very, very low.  And people don't understand what is happening, so when you are talking about artificial intelligence, some people think: who is this monster coming?  Is it like the Worldcoin, taking over data?  Is it going to be a threat in terms of our internal security?  There is a lot to talk about, a lot to discuss, and we must engage people globally.  I thank you.

>> PEGGY HICKS: Thank you very much for that perspective.  Three really important points.  I am going to take one more question before we go back to the panel, please.

>> AUDIENCE: Hello.  I'm Emma Gibson from the Alliance for Universal Digital Rights, or AUDR for short.  And thanks, Eileen, for mentioning the event on day minus 1 that we co-organized, which was around the Global Digital Compact and our launch of 10 feminist principles for a Global Digital Compact; you can ask me for a copy, and I will give you one afterwards.

It was great to hear some talk about gender.  Human rights is the first of our principles: the GDC should be based on human rights law.  But I would love to hear a little bit more from people about the importance of gender specifically in the Global Digital Compact, as a cross-cutting theme.  Thank you.

>> PEGGY HICKS: Thanks very much for those three sets of questions.  We, actually, ended up with five questions.

So, I will come back to the panel.  Maybe Gbenga, since you went last before, you can go first now, please.

>> GBENGA SESAN: Thanks for asking that question.  So, first of all, I don't know how many people were in Addis Ababa last year for the IGF.  I said a few things there.  One of the things I said was that it was pretty embarrassing that a country that shut down the internet was also hosting the Internet Governance Forum.  That may not have been a diplomatic thing to say, but it's the truth.  Everyone, including Saudi Arabia or anyone else who hosts the IGF, needs to understand what the IGF means.  It means it is a forum for conversation around the internet, including principles of human rights.

I believe that, apart from speaking to state obligations and civil society advocacy, we should not be worried about surfacing the concerns that we have and asking questions of, you know, anyone who has stepped forward to offer to host the IGF.

>> PEGGY HICKS: Thank you, Gbenga.

And we also had questions relating to gender, to cyberbullying, to the protection of privacy and data, and to the impact of AI on the field of work.  Who wants to jump in?  Mallory, you look ready.

>> MALLORY KNODEL: I can jump in with answers to two of the questions.  On the first one, specifically on cyberbullying, I just wanted to highlight a really instructive and informative report that the Center for Democracy and Technology put out about women of color who are politicians in the United States and the experiences they have online; that intersection is a real mess for those folks.  And what that research really highlights is a bunch of different elements where, sort of, all stakeholders have a role to play.  I'm not going to outline all the recommendations, but you can look them up.

So, the process by which you actually research the problem, understand it from a nuanced perspective, what's actually going on: that is the kind of thing you have to conduct every time there is a problem like this, and it requires the platforms to open up their data to researchers.  It requires a fine-tooth comb when you are going through what the experiences are.  At the high level, the recommendations are things like giving users more agency.  They need the ability to block and report.  They need the ability to do that at scale, because the attacks against them are often being done at scale.  Things like that.

So, I feel like that's a really, really important question.  It's particularly important for those most affected by gender discrimination, so I think it's a great example of the kinds of things we need to pay attention to when we are talking about real-world harms and human rights.  But there are, of course, many, many others.

The second question I wanted to respond to was about, you know, next year's forum, just because I alluded to this at the end of my remarks.  I think that irrespective of the actual location or the host, human rights has to be a huge part of the conversation.  In fact, sometimes I feel like this conference should really just be a human rights and sustainable development conference at which we talk about the internet and AI sometimes, right?  That would sometimes be more useful and beneficial than making this about the technology.

So, I want to just say that we have had this happen before.  We have had internet governance meetings happen, year after year, in places with questionable human rights records.  It's happened for the IGF in particular: Turkiye in 2014 was just after Gezi Park.  Singapore hosted the IETF one year, and folks were trying to boycott it because if you were LGBTQ, you were not technically allowed to go to Singapore then.  That was a real problem.  Things like that haven't deterred the conversation from happening.  We have to lean into it and be louder about it.  It's an opportunity to talk about these things in a different way, and so I would challenge all of us to make sure that we do that.

>> PEGGY HICKS: Thanks, Mallory.

Eileen, you want to come in?

>> EILEEN DONAHOE: It's difficult; there are so many good questions and so many layers to them.  I will start with the two points from Access.  On the first one, I have something positive to say.  You talked about the necessity of having civil society in the room for these tech policy conversations and internet governance conversations.

My observation, particularly at this IGF, is that the expertise in the civil society community has skyrocketed, certainly as compared to governments in understanding how the technology works, and in relation to, let's say, the private sector and technologists in terms of exposure to the international human rights law framework.

We talk a lot about capacity building to get people in the room, Gbenga, you said that.  I think we also need to think about capacity building for the technology community and governments.  So, that's just a different angle on the same issue.

On Saudi Arabia, I mean, I would say it is the responsibility of the IGF community, the Leadership Panel, the MAG, to, as Mallory said, make sure it is squarely on the agenda and emphasized, not hidden.  And certainly make sure on the inclusion piece that people are paying attention, like who is not being allowed in or included.  Because it might be different.  It might be different.

I am going to also get to the last question about gender and join it with the one from our colleague from Kenya.  This whole conversation is kind of about the tension between the human rights consequences of exclusion, from enjoyment of the technology and the benefits of the technology, and from the processes around governing the technology and being in the room.  But that sits in tension with the risks of inclusion, whether they are inherent risks of the technology that were not thought about before it was deployed, or malign use of the technology by authoritarian governments for surveillance, censorship, and control of the information realm.  And those two things are in tension.

I think that event on day minus 1 really made a giant impression on me, on how the gender piece is at the heart of that.  Both because women are the most excluded, from connectivity itself, but also from all the other dimensions of meaningful inclusion, to really participate and benefit.  But gender is also at the heart of some of the risks: the ways the technology is used that are peculiar to women and girls, where there's a gender dimension to it.

And I think, on both of those sides of the equation, this is why we have to elevate the gender conversation: because if you want to understand the dynamics and the tension between both of those sides, which we have to address at the same time, we have to solve for the gender piece.

On the other point by the Kenyan colleague, labour displacement, in effect the consequences of AI for labour: that is so undertheorized and so underappreciated, and ultimately I think it's going to hit us all, in every society.  You know, societies where AI is more embedded are more at risk on the front end of the consequences of that.

Well, I think there's a whole community of people thinking about that, but they tend to be economists, people who otherwise do labour issues.  I don't think many in the technology community are yet focused on it, or even in the human rights community.  So, I appreciate that question.

>> PEGGY HICKS: Thanks, Eileen.  Just a small comment linking what Mallory said, about how what we really need is a human rights conference that brings in the internet, and your comments about the gender dimension: you know, one of the things I am often struck by is the extent to which we try to separate out and solve problems online when, in fact, the online world is a reflection of the world that we live in.  And that distinction or separation is never going to be truly successful.

Peter, would you like to come in?

>> PETER KIRCHSCHLAEGER: Yeah, I would like to pick up this point, because I see a huge opportunity that we can actually find technology-based solutions to the gender issues which were raised.

And I think it's not rocket science to find ways to identify gender-based hate speech.  It's not rocket science to find technology-based solutions to identify cyberbullying.  I think what we are lacking is the will, be it from states, be it from the private sector, to really make that the main focus for the next year, rather than striving for more efficiency, just to put it very simply.

And then regarding the impact on human labour, I couldn't agree more with you that we are really not paying enough attention to the question of what kind of impact the use of so-called AI has on human labour.  We kind of seem to pretend this is not really happening, and that we still have a capitalist free market striving for full employment, while, you know, it's actually going in the other direction.  And we have now seen years where we had economic growth with unemployment rates also increasing, which is a new phenomenon from an economic point of view.  And I think we have to engage in that interesting debate about what we should strive for, where ethics can contribute, and what it means for humans to work or to not find paid professional tasks, with, of course, other disciplines contributing to trying to find a solution for that.

>> PEGGY HICKS: Thanks very much.  I think there's a real agreement on the need for the field of work issue to be looked at more thoroughly.

Frederick, would you like to come in?

>> FREDERICK RAWSKI: Thank you.  I will be brief.  I agree with everything everyone has said, 100 times over.  What struck me, particularly in the comment around cyberbullying and the other comment on the centrality of gender, or its lack of centrality where it should be central, is part of this framing and translation problem that I was thinking about in the first comment, which is that these fora are amazing: they are great for talking about the principles, they are great for bringing stakeholders together.  But I find myself, as a human rights lawyer thrust into the centre of a giant tech company, always needing to make all of it actionable, to turn it into things we can do, into processes.  Some of them are technical, some of them are not.  Some are policy, some are messaging appropriately to leadership.  In the cyberbullying example, for instance, human rights is really where we need to start with that question.  You need to start from the principles and from the common and shared goal, the vision that we have.

But very quickly I have to get to policy.  We have a very robust policy on bullying and harassment at Meta.  And yet it needs to be constantly iterated and constantly evolved to take account of particular contexts: country contexts, cultural contexts, language contexts.  And from there, we quickly go to finding mitigations: how we land upon them, how we design them, how we implement them very quickly.  For instance, we have made adjustments to our policies on women public figures and the kinds of vulnerabilities that they have, and have made adjustments to bullying and harassment policies that add protections.

There is also the issue around user control: enhancing user control and transparency about these things so that people have all the tools that they need to protect themselves.

And then there are many other complicated issues, like language.  Every single issue often comes down to language when you are talking about content.  And bullying and harassment in particular is a space where we constantly need to evolve those policies, and that cannot be done without engagement with communities, without gaining understanding that we don't have ourselves and that is specific to the cultures and communities where the platform works.

So, again, thinking forward to the next IGF and other contexts: it's that next step, from the amazing conversations that we have had to finding ways to collectively arrive at specific solutions in some of these contexts.  And, obviously, AI adds a whole other layer to that.

>> PEGGY HICKS: Thanks very much, Frederick.  We have time for a few more questions, if there are people who would like to come to the mics.  I notice that we didn't thoroughly tackle one of the points raised by our guest from the Kenyan parliament, who asked as well about the privacy side and the data side and how we see those issues.

I was in a conversation directly before coming here that really stressed that the theme of this IGF is around AI, but that we need to start seeing the AI challenge as a data issue.  And, you know, at a minimum, one of the key elements here is transparency, and that's a point going back to what Frederick said as well.  But there's a whole other topic about the way data protection is a crucial piece of the AI equation.

I see a question over here, please.

>> AUDIENCE: Sorry.  I'm a bit short, obviously.  I wanted to follow up on a question that was raised, because I felt like the response wasn't necessarily sufficient, and I think it actually speaks to wider systemic issues.  When we are talking about hosting, whether it's the IGF or other forums, in certain contexts, the responses, at least from what I heard ‑‑ thank you ‑‑ were more about the agenda items being raised, so that we will include human rights on the agenda.  But it feels like the people who have that lived experience, who are the most affected, are being excluded by design from these spaces, where we have well-documented evidence from credible sources of the use of technology itself to surveil marginalized and vulnerable people.  So, I think we need to talk about that.

And then there's the other piece, about people who are excluded because of visa issues.  I feel like this is a repeat problem in a lot of different forums and contexts.  Whether it's the IGF, or RightsCon, where we had this happen as well, the conversation becomes about how we guarantee entry in just the one context of allowing people to come to a conference, but does not speak to the wider issue of the bordering of the world that's being enhanced by technology, where certain groups of people are allowed to move freely but others are rejected.  A lot of that is the vestiges of colonialism, and we are not making decolonizing technology a key part of our agenda at the IGF, even though technology is reinforcing some of these existing systems that are quite problematic.

We see, even within the African context, for example, that people have been prevented from moving within their own continent.  Whereas we already have mechanisms and systems in place elsewhere, for example the EU model of being able to travel freely within the continent.  We know we can do this.  It's not that difficult, in the sense that the models exist, but it's only afforded to some populations.  So, I think we need to be speaking about the systemic issues, and they are often ignored.  So, I wanted to hear some of that: what are the actual tangible, concrete actions that are being taken to address this, instead of these repeatedly coming up as talking points in different conferences, including the IGF.  Thank you.

>> PEGGY HICKS: Thank you very much.

We have another question over here, I think.

>> AUDIENCE: Hello, everybody.  I am Reynaldo from Brazil, and I am representing the youth of Brazil.  I would like to echo everything that was said before, because I was thinking exactly that.  We come from a Global South perspective, from a youth perspective, and I feel that in the last IGFs we really did not have sufficient representation of our queer community.  We have just a few people who are transgender.  We have sessions of debate that do not represent our perspectives, because we are daily facing violence against our communities, not only on the internet, but offline as well.  And I see that we have to face these debates and try to use these forums to input our perspective, to try to bring more of the youth perspective and the queer perspective to the debate, not only the Northern ones.

And I just wanted to put in this commentary ‑‑ not a question ‑‑ and try to bring this way of thinking, so that in the next ones, we can, like, build something more queer.  Thank you.

>> PEGGY HICKS: Thanks very much.

I don't see any other questions, and I told the panelists that we would come back to them for a final comment.  So, what I think we will do is respond to the two final questions or comments here as part of your final remarks.  And I didn't give anybody an order, but I'm thinking about going in reverse order, if that's fair.  Again, Gbenga, do you mind going first?

>> GBENGA SESAN: I don't mind.

So, I think it's important to re-emphasize that this conversation is not about getting human rights onto the agenda in Saudi Arabia as a panel.  That would be tokenism, and we are not talking about tokenism.  We are talking about respect for rights, and being seen to respect rights.  And I think this is really important, because when I spoke about barriers earlier, this is lived experience.  This is not theory for people.  There are people who have had experiences that are not just dehumanizing but have also affected the spaces they can go to, the opportunities they can get, and the things they can do.

So, I think it's important to understand that this is not just about a panel, or about getting certain colors of faces on panels or something.  It's about making sure that when we have to have the difficult conversations, we have these difficult conversations regardless of where this is held.

And by the way, this is not just about the next IGF.  This is about continuous IGFs.  This is about continuous global forums.  There are times when we even need to call out countries and platforms that speak the language, but do not respect the rights as they should.  And I think this is really important.

In terms of, you know, representation, this is a conversation that continues, and I am glad to know that there are, sort of, tracks and panels and all that.  But I think we must realize that as long as we continue to get these questions ‑‑ you know, young people are not represented, minority groups are not represented ‑‑ they are not themes we should ignore.  We should pay attention to them and make sure that things as simple as the guidelines for how to organize workshops are literally implemented, so that they become an opportunity.

And I am grateful that we have had this conversation today.  And I hope that it's not just going to end with this panel.  I hope the conversations will continue in the hallways and even beyond here, about what we must do: states, as an obligation; the private sector, because it's good for your business ‑‑ and I'm not trying to emotionally blackmail you; the technical community, because you have to build it in; and civil society, because we must not shy away from speaking truth to anyone, including even our allies.

>> PEGGY HICKS: I'm wishing I had left you for last because that was a great closing statement.  Thank you very much, Gbenga.

Over to you, Frederick.

>> FREDERICK RAWSKI: Thank you.  This is my very first IGF, and it's my very first time representing Meta at such a forum, so I will forgo my critique.  But I have been excited to be here.  I've been very, very happy, and the company has been very happy, at the framing, the centrality of human rights to almost all of the conversations that we have been in.  It was very hard to figure out where I should be, because almost every single conversation, every single panel we have been engaged on, has touched upon human rights, usually explicitly, and if not, implicitly.  So, I do think it's worth thinking about doubling down on that approach, as a few people have suggested.  And from our side, I think we would be very pleased with that kind of framing.

At the same time, look, I have been to a lot of conferences in my life, on the civil society side and in many other capacities.  Yeah, a lot more can be done to make this a more inclusive process.  A lot more could be done to ensure that the framing of the issues, and the issues that we deal with, get to the heart of the problem, particularly the more systemic problems.  And so I think there is work to be done there as well.  But I am very excited to be part of this, and just wanted to end by saying that we get a lot of criticism, rightly so in many cases, and I am very happy to spend time talking about the issues that are specific to us.

But part of why Meta brought so many people here and decided to have such a high-level delegation to the IGF is to message to everybody, across all of the stakeholder groups, that we're committed to this conversation and are ready to move forward and support it in every way we can in the future.  Thanks.

>> PEGGY HICKS: Thanks very much, Frederick.

We wanted to turn to Marielza, if you are online and have a concluding comment.

>> MARIELZA OLIVERA: Thank you very much, Peggy.  I just wanted to make two comments regarding previous questions.  You know, the issue of online violence has really become a new front line for journalists, for educators, cultural workers, and scientists, and particularly for women in these professions.  This is an escalating freedom of expression and access to information crisis, because it is really driving away the professionals who actually bring truth and facts to the digital ecosystem.

And this kind of harassment and abuse is a combination of not only threats and misogynistic comments, but digital privacy and security breaches that expose identifying information and exacerbate the offline safety threats they face.

73% of women journalists are actually harassed online, and a high proportion of those actually suffer attacks offline.  So, this is a thing we need to bring in and discuss.

I wanted also to comment on the labour question that was asked before.  We know that jobs will certainly change, you know, through artificial intelligence, and we don't know yet what the end effects will be in terms of employment numbers.  But what we see, and what really worries me more, is how technology is stripping away some of the labour protections that took decades to construct.  Work is becoming really precarious, and it is not really enabling people to realize their right to earn a decent living.  It may also impact consumers negatively.

I saw the other day an article about nurses being hired like Ubers, and this has terrible consequences for the health of their patients and for the nurses themselves, for their own mental health.  We need to bring these issues into our conversations.

So, for that, I would like to close by offering the IGF one suggestion.  You know, the IGF is a great place for bringing together multiple stakeholders, but we are still missing some who are critical to our dialogue on human rights and technology.  I would like to remind us that the regulatory authorities who create the norms under which digital technologies operate should be brought in: information commissioners, data protection authorities, human rights commissioners.  They should be a regular feature and regular participants in IGF meetings.

The media also are not present enough.  And they are the ones who create awareness among the general public about digital issues and bring their own experiences of the opportunities and harms that digital technologies create.

And policymakers who lead digital transformation, who can help us create this pipeline from knowledge to policy on a human rights-based approach to digital development.  And finally, judges and public prosecutors, who are the ones who bring human rights to justice, need to be part of our conversation.  I will close here and thank you very much for the opportunity.

>> PEGGY HICKS: Thanks, Marielza.  I think that call for an even broader sense of inclusion is really wonderful.  Really appreciate that.

I am going in reverse order so I'm going to go next to Mallory, please.

>> MALLORY KNODEL: My concluding remarks will also cover a couple of the questions we didn't get to.  I wanted to start by talking a little bit about privacy, because it didn't come up, and I think it is an interesting omission on this panel.  I think it's because ‑‑ and it's hard to overstate this, I keep saying it ‑‑ we have a really complex landscape.  And I think privacy is a good example of that.  We have been doing nothing but talking about privacy for the last 10 years, or more, right?  It's 2023; 2013 was the Snowden revelations.  They were helpful to our argument.  They highlighted and demonstrated what was wrong.  And we did a great deal, especially at the technical level, to fix that, rolling out transport encryption everywhere, ensuring people's connections, and now also their DNS lookups, are behind encryption.  But we have never had a bigger privacy crisis.  And it's worth introspecting on that.  Is it the business model?  Is it that you can still be targeted, not just by someone who wants to surveil you, but by a regime that wants to surveil you and also do you harm?  We have to do both.  We have to look at the big picture and rearchitect the way the internet works, and we have to zoom in on the details and pay attention to how end users are being affected.  And that's just one issue.

And everyone is bringing in this incredibly important element of representation and of participation.  And so we need more, not less.  We have more issues to talk about, more dimensions to those issues to talk about, in more places.

So, one of the last things I wanted to conclude on is something that was just very briefly mentioned but that we haven't really confronted in this panel yet, which is the possible creation of new mechanisms within the UN to talk about the internet and AI and other things, with the Global Digital Compact and so on.

I feel like we are never actually replacing anything.  We are only adding to the space.  That is not necessarily a negative thing in and of itself, but it is something we have to reflect upon.  We are increasing the complexity of what it is we are trying to govern, of what it is we are talking about when we are governing it, and then of the processes and how we actually do it.

I would caution us to really think about the opportunity there, but also the risk.  The opportunity is what the colleague over here from Brazil said: we can bring in new and better and more interesting and fun issues from the next generation, real things that are happening in places that we haven't had enough representation from yet, and deal with those and expand what we are able to address.

But I would just be careful that we do not take all kinds of social issues and put them into the technical bucket, that we don't technocratize things.  Rather than only expanding this community, we should think about how we take our technical expertise and our technical conversations out into the world, where the real end issues are being discussed and put forward.  I think that's another model: yes, it still contributes to the complexity, but it comes at the issues from a different angle.  Thanks.

>> PEGGY HICKS: A very interesting and thoughtful comment there, Mallory.  Let's go over to Peter, please.

>> PETER KIRCHSCHLAEGER: Thank you so much.  First, I want to thank the colleagues who asked these questions, and I actually want to dedicate my final statement to, basically, reiterating what they were saying, at least as I understood it; I hope I paraphrase it correctly.  Basically: listen, we cannot deal with the visa issue if we are not talking about migration in a more systemic way.  And I think that's something where we can be even more self-critical, continuously asking ourselves, when we have a concrete question, to look at it from a systemic point of view.  We need to look at institutions, we need to look at structures, which may be structures of injustice, and we have to address them.

And the same goes for the strong statement on representation.  At least I hear a strong call to every one of us to keep continuously being self-critical about, you know, whether we really live what we are talking about.  Are there things we are willingly or unwillingly not respecting in our practice?  But also, are there maybe some blind spots we need to address?  Because, and this is my last sentence, I think the field of, let's say, legal discussion about artificial intelligence ‑‑ so-called artificial intelligence ‑‑ but also the ethical discourse about it, at least has the tendency to run the risk of being very good at preaching, but not so good at action so far.

So, I think we can take a huge step forward if we start really taking action on what we have been writing in recommendations, guidelines, et cetera.  Thank you so much.

>> PEGGY HICKS: Very good point.  I see Gbenga smiling and I expect many in the room are as well.  The practical side of what this all means on the ground is crucial.

I'd like to turn first to Cameran online.  And we saw you pop in for a moment, Cameran.  I hope you are still there to give us some thoughts from your perspective and we will close with Eileen.

>> CAMERAN HOOSHANG ASHRAF: Wonderful.  Thank you.  Yes, so, first I want to just briefly address a couple of the questions that were asked earlier.  One of them was about gender, and another one was about queer representation.  And I think, to build on what Frederick said earlier, this gets to a point about how we are structuring, at the big platforms and the tech companies, what we consider human rights teams.  Oftentimes they focus on state-based violations, the traditional human rights narratives.

But where, for example, does gender equity sit in an organization, when it's global?  And I think we need to start to think about repurposing human rights teams: what are we considering human rights, and how are we defining human rights outside of, perhaps, just privacy and surveillance and freedom of expression?  As I often tell my students, there's more to being a human being online than what I say and who is listening.

So I think it's important, from the platform perspective, to actually start to make some actionable steps towards understanding what human rights teams do and how they move forward.

With regard to the question about AI and labour displacement, I think a really important component of labour involves access to factual and accurate information.  My academic side is conflicted about it from a human rights perspective, but I do think that Wikipedia, for example, is a very vital resource for a lot of people.  They are able to get information they might not otherwise have gotten.  And even though I'm skeptical of AI in general, here I think there is a wonderful potential application of it, because it can be used, for example, to translate articles across languages.  There can be massive gaps between different languages, and there are over 300 languages on the platform.

It can also help people for whom the language they are writing in isn't their first language.  There are opportunities, too, for individual communities and language groups around the world to build out the knowledge that's available for people, and that might help offset, unfortunately, some of the disruptions that will be happening ‑‑ and this is not from a late-stage capitalist perspective.  Also, on my final point, briefly: some members of the Wikimedia team are in the audience, if you would like to speak with them.  They are happy to chat with you about human rights, AI, or anything else.

And I started by discussing dignity, so I think I will go ahead and close with that.  You know, governments around the world have a basic charge to protect their citizens.  And to me, that means even the most repressive regime has some basic conception that people have a worth and that that worth merits protection.  We have a baseline there.  And I think going forward, it's really important that laws and regulations, you know, and social norms around technology, around the internet, around artificial intelligence, continue to build upon that idea that we all have something worthwhile, worth sharing and worth contributing.

And I would say especially when we disagree with each other, given everything that's happening, and especially when facts are inconvenient.  So, I really do hope that we move forward working on that baseline and remember that we are here to protect human beings online and offline, and that includes digital access around the world and all of those who are advocating for free and open knowledge.  Thank you.

>> PEGGY HICKS: Thanks very much, Cameran.  We are running out of time, so quickly over to Eileen for one final word.

>> EILEEN DONAHOE: Wow, that last question over here, I just ‑‑ Gbenga, you said everything that needs to be said, I think, about inclusion and process and risks for people in the real world.

The question that comes up for me is that I am not really aware of how the decision was made.  I mean, choices have been made in the past, Mallory, you know: Turkiye, Ethiopia, Azerbaijan.

So, I don't know how those decisions are made.  But perhaps, as we think about the next phase of the IGF, those decisions could be made elsewhere, with different people at the table.  That's one idea.

I am going back to a couple of things I heard from colleagues.  Actually, it was the person online, our colleague from UNESCO, right?  She talked about the need for regulatory authorities in terms of all the online content-related harms.  Peter, at the very beginning, you said, well, we can't forget the other side of the equation: tech regulation itself has to be consistent with human rights.  And that, too, is a very significant problem around the world.

You also talked about not just the impact of technology in generating and facilitating violence against women or violations of human rights; you hit the other side, which is that technology should be applied to be the solution as well.  And there's always going to be this game of cat and mouse.  But that has to be done more.

And then lastly, Frederick and Mallory, you both emphasized this need for translation between the tech community and the norms community.  I think that is a really exciting area, and I think there's a lot of potential and a lot of growth in that space.  We have been talking about it for a few years at a very abstract level.

But I think people are starting to figure out what it looks like in practice when you are talking about human rights and AI.  How do we do those assessments?

Last one: DPI, digital public infrastructure, was brought up outside this room a lot.  And I see that as an area where you hit the inclusion problem, inclusion in the technology, tech for the SDGs, but where you also connect it with human rights by design.  So, you are basically bringing economic, social, and cultural rights and civil and political rights together.  That's another area to be mined.

>> PEGGY HICKS: Great, thanks.  A lot of content there.  We are at the end of our time.  I think it's been a really rich conversation.  I hope it leaves all of you, as it does me, with not just some insights but also some work to do: what we can all do to pick up on the themes that have been brought out in this session, how we can improve the rest of this forum, and how we can build towards bringing these human rights issues and the human rights framework into the conversations that we want to have here, in other forums, and at the next IGF as well.

Thank you all so much for your participation.  I realize there were some questions online that we couldn't get to.  I apologize for that.  And really look forward to having further conversations on this topic throughout the rest of the IGF.  Thank you.

(Applause)