IGF 2018 - Day 2 - Salle XI - DC Platform Responsibility: AUTOMATED DECISION MAKING AND ARTIFICIAL INTELLIGENCE (DCPR)

The following are the outputs of the real-time captioning taken during the Thirteenth Annual Meeting of the Internet Governance Forum (IGF) in Paris, France, from 12 to 14 November 2018. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

 

     >> MODERATOR: Okay. We'll be starting a couple of minutes late, just to allow everyone to find the room.



     >> MODERATOR: All right.  Good morning, good morning, everyone.  My name is Nicolo Zingales, and I'm the chair of the Dynamic Coalition on Platform Responsibility.  I welcome you to our meeting.  We are hoping that people will manage to find the room; there are a couple of panelists that are still missing.  Let's get started, because we don't have much time. 

So this session will be about artificial intelligence, automated decision‑making and online dispute resolution.  The purpose of the session is to present some of the work that we have been doing over the course of the last year, trying to identify best practices on dispute resolution and situate them in the context of artificial intelligence.  By way of introduction, the dynamic coalition is focusing on the roles and responsibilities of online platforms, trying to understand, in an intermediated world where decision‑making is in private hands, what safeguards we need to make sure that fundamental rights are being respected.

As part of this work, we have looked at what constitutes an effective remedy and how that can be implemented in the context of online platforms.  We have looked at the terms of service of a number of online platforms to figure out whether they provide a sufficient framework for users to have their rights vindicated.  So before we delve into the specifics of each individual platform and how this right is being implemented in that context, we are going to have keynotes that will shed light on artificial intelligence, and in particular on the work UNESCO is doing on this, which will connect to the working group that developed the best practices.

It is my pleasure to introduce Moez Chakchouk, the Assistant Director‑General for Communication and Information at UNESCO, who will tell us a little bit about the project.  You have the floor.

     >> MOEZ CHAKCHOUK: Thank you very much, good morning, everyone.  I'm very happy to join this panel and I thank you for the invitation. 

As you know, we at UNESCO are really happy to host this IGF and look forward to working with different stakeholders on artificial intelligence.  Artificial intelligence is a new topic for UNESCO, as we are preparing to launch an initiative on it.  One of the major interests of the work in the Communication and Information sector is how to coordinate between the different sectors to engage with different stakeholders on the topic of artificial intelligence.  The theme of this panel, automated decision‑making and artificial intelligence, is part of our duty to raise awareness: we are an intergovernmental organization, but the question is how to empower different stakeholders at the country level on all these important topics.

Of course, the dynamic coalition did a lot of work that is very important for us to engage with and understand, and to find the right reflection about future work.  What is important is how we address all of these issues, including alternative dispute resolution through automated decision‑making, which is work done by this coalition in partnership with different platforms.  This inspires us at UNESCO.  When we deal with artificial intelligence we cannot work alone: other organizations, including the technical community, are engaging on the artificial intelligence framework.  We think that whether it is to prepare ethical and normative work, to engage with society on additional skills, or to raise awareness, we need to work together.  This is what motivates me to be on your panel and, of course, to have this kind of discussion.  You should also know that, as we host this IGF, UNESCO is organizing an open discussion meeting on Thursday, just after the IGF ‑‑ not in our headquarters, but at the Mozilla Foundation, in partnership with ISOC and Mozilla ‑‑ and you are welcome to join us.

And you know that in our work in Communication and Information there have been a lot of messages about preventing hate speech, bullying and disinformation.  Our mandate is linked to all of these topics, and we think that reinforcing online human rights goes through digital skills and media and information literacy, including around the use of artificial intelligence and all the kinds of information we deal with on the platforms.

We think UNESCO can reinforce training and development work.  We have this in the science sector, and we have partnerships with different organizations around the world for more scientists and researchers to learn about AI.  We think artificial intelligence is coming, but at the same time not all member states are ready: governments, especially in developing countries, are not aware of the policy challenges.  At UNESCO, we think our role is to make this clearer for them, and to engage with different stakeholders in order to raise awareness about the skills and the responsibility issues, of course while considering online human rights and all the principles. 

Okay, I will maybe be interacting later, so let the other speakers deal with the technical aspects and I will come back to comment on them.  Thank you.

     >> MODERATOR: Thank you very much, Moez Chakchouk.  I want to introduce the co‑moderator, Luca Belli, who just stepped in.  He will talk about why we started the project and how it connects with the previous work done, in particular with the report we published last year.  Then we will continue the conversation.

     >> LUCA BELLI: Thank you ‑‑ can you hear me?  Yes.  Thank you very much, Nico, for the introduction.  I apologize for being late; I was misled by an officer who sent me to the other side of the building.

We have been working with the coalition that Nico and I created four years ago, growing it and doing a lot of different projects and partnerships, putting forward solutions.  It is very good to see that some of them are now really being considered as potential drafts ‑‑ let's say standards, or at least good practices.  And this is really the core of our outcome: designing good practices based on what we already had.  The point is to create continuity.  That is actually the work of the dynamic coalitions within the IGF: it is not only a 90‑minute, or as it happens right now 60‑minute, debate.  It is a process that keeps on having different steps over the years, and every step enriches the previous one. 

We started by discussing what we could do; then we produced recommendations on terms of service and human rights, because we understood that the main concern with regard to platforms was how they could act while respecting human rights.  In line with the UN Guiding Principles, states have the obligation to protect human rights and provide effective remedies, of course, but platforms have a corporate responsibility to respect them.

This is the starting point.  We then elaborated the recommendations, including in the book we published last year, "Platform Regulations: How Platforms Are Regulated and How They Regulate Us".  I will give a copy to Moez, who was not here last year.  You can freely download it; everything we have produced so far is open access.

Last year, when we released this book, we had a meeting ‑‑ like the one we had yesterday with friends who are also here in the room ‑‑ trying to understand what we could do to build on what we had already done, so that what we do is not wasted but is one more step in the right direction.  The idea was indeed to try to identify best practices: to further analyze the terms of service of platforms and identify how they resolve disputes within their systems while respecting human rights and the fundamental right of due process, which is a positive obligation for states and something that platforms, as businesses, have a responsibility to respect. 

We organized a working group.  We had another meeting at RightsCon with some members of the working group; we then further expanded the working group and developed a methodology for the study.  Thanks to the work of the members of the working group ‑‑ some of whom are here in the room, and to whom goes all our gratitude ‑‑ these best practices are now available for comment on the IGF website, in the intersessional work section: the best practices on due process for platforms. 

They aim precisely at identifying the fundamental guarantees that every individual should have when using an alternative dispute resolution mechanism. 

We identified two core pillars of platform regulation as particularly relevant: alternative dispute resolution and automated decision‑making through artificial intelligence.  Alternative dispute resolution is the classic way of doing things: you design a dispute resolution mechanism, and it is quite literal.  Artificial intelligence is something that is increasingly implemented.  We see, for instance, Facebook using it to try to moderate content; yesterday, President Macron said that is a line of thought he supports and that he wants to increase the use of such tools to avoid hate speech.  That may be a questionable statement ‑‑ it all depends.  This is the question: what do we mean by artificial intelligence, what is in the code, what data do we utilize?  Just branding something as artificial intelligence or "smart" does not make it smart or intelligent.  So there are a lot of questions on which we want to further expand our work, and we have excellent panelists who are already doing nice work on this.  I will just close my remarks by saying you can all comment on the best practices that are on the website.  We extended the comment period on purpose until 30 November: I think it is useless to have comments only before the IGF; it is important to have them during and after, to get the widest range of comments.  You are all invited to do so.  Nico, go on with your moderation.

     >> MODERATOR: Just to add to that, to make clear to everyone the difference between artificial intelligence and automated dispute resolution: what we are seeing at the European level is proposals that require platforms to have some systems of automated decision‑making to accomplish the policy goals that the European Commission has in mind.  For example, in the Audiovisual Media Services Directive, there is a proposal for revision that requires member states to ensure that on platforms users are not targeted with hate speech and there is no content that is harmful to minors, and that there is a dispute resolution mechanism that allows anyone adversely affected by the automated systems to appeal against them.  So that is something that is in the proposal, but there is no specific requirement on how this must comply with basic principles of due process.  That is where we hope to fit in.  There is a similar proposal in copyright, which you might have heard of ‑‑ Article 13 ‑‑ which requires platforms that make available large amounts of content to adopt adequate and effective content recognition technology.  That basically means YouTube's Content ID, or systems of that type.  And again, it requires that member states ensure that there is an effective complaint and redress mechanism for those affected. 

And there is a proposal by the European Union that regards platform‑to‑business fairness.  This is not about consumers, but about businesses that might be adversely affected by online intermediation services.  Again, there is a requirement that an alternative dispute resolution mechanism be offered within the platform to make sure users have an effective and quick resolution.  In addition, there is the requirement that platforms indicate mediators, so that disputes can be resolved quickly even outside the platform context, without going all the way to court.  And in this proposal, the European Commission also requires platforms and online intermediation services to make available transparency reports regarding the use of these alternative dispute resolution mechanisms: how many complaints are received, how many are granted, and the subject matters on which these complaints are lodged.  I think that complements the work that we're doing here.

Given that having an online dispute resolution mechanism is becoming more and more central to the goals of the European Union, and I think increasingly worldwide, there is a need for that in addition to artificial intelligence that detects, as a first instance, what content might be illegal.  We will hear from the other panelists what the challenges are when it comes to making that initial determination: the automated decision‑making identifies content that might at first sight be illegal, and we will hear what this can generate in terms of chilling effects and impact on the fundamental rights of users.  We don't have Google's representative, who was supposed to be here, so in the interest of time for the discussion we'll move on to the next speakers on the panel.

First of all we will hear from Natalie Marechal ‑‑ you can correct me ‑‑ who is working for Ranking Digital Rights.  I will let you introduce the project; you are focusing your work in the next year on targeted advertising and automated decision‑making, and I think that is very important work.

    >> NATALIE MARECHAL: That's right.  Thank you, Nico.  As Nico mentioned, I work for Ranking Digital Rights, and I have a bunch of literature that I would rather not bring back to the U.S. in my suitcase, so please come up and chat afterwards and I will give you four‑pagers and whatnot.  Ranking Digital Rights is a research initiative that works with a global network of partners to set standards for how companies in the information and communication technology sector should respect human rights, with an emphasis on freedom of expression and privacy.  Since the first index launched in 2015, we have had success at documenting companies' public commitments and disclosures related to how they respect their users' privacy and free expression rights, as well as engaging with companies to work toward greater commitment to protecting those rights and greater transparency about the specific mechanisms that companies employ to that effect. 

There is a lot more information and data visualization on the website, rankingdigitalrights.org, which I invite you to visit.  As Nico also mentioned, we are thinking ahead, for future versions of the Corporate Accountability Index, about how we are going to help guide companies and hold them accountable for respecting human rights in the context of targeted advertising and automated decision‑making, which often makes use of artificial intelligence.  That work is in the early stages.  As I'm sure you have noticed, there is a plethora of reports and meetings on society and artificial intelligence, and rather than work in parallel to those important efforts, we decided to wait and see where the conversation goes over the next year and glean insights from people in this room, throughout the building and around the world.  The work on artificial intelligence is preliminary, but the broad idea is that we're going to be working with partners, such as yourselves perhaps, to identify specific commitments and types of disclosures that we're going to expect companies to make around the use of automated decision‑making, to guarantee to their users and the broader global community that they are in fact taking human rights into account.  Now, when we talk about artificial intelligence, it is important to keep in mind that there is no such thing as general artificial intelligence.  HAL is not here yet.  Nobody is coming to kill Sarah Connor just yet.

But there are many kinds of narrow artificial intelligence that are perhaps best thought of as tools for analysis, and that should probably not, in most cases, completely replace human decision‑making.  When we do allow decisions to be made by AI systems, it is important to understand that the humans who designed and deployed the system have delegated their responsibility to make decisions, and those people should still be the ones held accountable, right?  An AI system itself cannot be held accountable; it is the human beings and institutions behind it that must be held accountable.  The decisions that are automated exist on a spectrum of consequence, ranging up to life‑or‑death decisions.

I saw a story on Twitter that the British government is thinking of launching fully automated killer drones.  I didn't read the full story because I was rushing to find the room, but if that is indeed underway, it is of grave concern.  There is also automated decision‑making taking place in the context of the justice system in many countries; of course, deciding whether or not to deprive someone of their liberty is something that should not, to my mind, be automated without human intervention.  Similarly, there are automated decision‑making systems in different countries that help classify where students might go to university, for example.

So these are decisions that come with grave consequences, and I think they should involve human beings who can be held accountable and who can use their better, contextual judgment in making such grave decisions.

On the other hand, AI systems are used by platforms to do things like order our news feeds, decide which targeted advertising to show us, and much more.  In the aggregate, these decisions can have very serious impacts on society and individual lives, even if at the individual level each decision is not that consequential.  And the scale here is such that it is impossible to imagine human beings being involved in each individual decision to show an ad or a social media post to a given user.  The scale is way too big, right? 

Nonetheless, it is important that humans exercise oversight over these systems and be held accountable when they fail to properly exercise that oversight ‑‑ think of Facebook's responsibility for aiding and abetting ethnic cleansing in Myanmar, and other cases like interference in elections in other countries. 

We're talking about two things.  One, how do the people who create and deploy AI exercise proper oversight over these systems? 

Second, how do the other actors ‑‑ governments, civil society, and each and every one of us as citizens ‑‑ hold that first group accountable?  The big question for me here is due process for the people who interact with decision‑making systems.  Can they appeal to human reviewers and make their case?  Do they have access to meaningful remedy, as per the UN Guiding Principles on Business and Human Rights?  There are limits. 

What kind of remedy ‑‑ again, picking on Facebook ‑‑ what kind of remedy can we expect Facebook to provide in Myanmar or the Philippines?  Some of us have seen the in‑depth news coverage that demonstrates the specific ways that Rodrigo Duterte used Facebook to win an election.  And he's a brutal dictator, no way around it. 

These tools have been used for harmful purposes that have led to human rights violations.  On optimism ‑‑ something I have noticed lately is that conversations about protecting human rights in the age of AI, or mitigating bias in AI, start with 30 minutes of platitudes about how wonderful AI could be.  To my mind, that is about as strange as going to a climate conference and opening with the wonderfulness of industrialization. 

I think focusing on the benefits that technologies can bring is kind of beside the point.  The benefits will come whether we spend time thinking about them or not, because all the incentives are there, right?  What we need to do is think very hard about what steps we will take now to prevent the worst undesired effects of new technologies from coming to pass.  Specifically, there is a false belief that technology can solve problems like right‑wing extremism, harassment, and threats to journalism in many countries.  These are political problems that ultimately will not be solved without political solutions.  Technologies like AI and machine learning can be useful to aid in analysis and in implementing solutions, but in and of themselves they cannot be the solution.

     >> MODERATOR: Thank you, Natalie.  I think it is particularly interesting that you refer to the case of targeted advertising that influenced an election, because this might be an issue where it is not about individual remedy; maybe it is the whole community that needs a way to contest the legitimacy of the mechanism.  Okay, that is food for thought.

Let's move on to the next speaker, Nic Suzor, from Queensland University of Technology.  Nic, you have the floor.

     >> NIC SUZOR: Thank you for allowing me to be here.  This is a complex but massively important topic.  I have three points to make quickly.  The first I will not spend much time on, because Luca already spoke about due process and alternative dispute resolution procedures.  Suffice to say that we now need to re‑imagine due process and accountability in a way that enables us to track and hold to account massive systems operating at a scale that existing institutions just can't cope with. 

So the traditional way that we deal with due process is through a fantastically expensive court or judicial system.  And it is just not feasible to expect that in the day‑to‑day governance of AI, at the scale of automated decision‑making that we're talking about.  Even if we're just talking about content moderation, for example, we're talking about tens of millions of decisions every week.  At an accuracy of about 98%, which is roughly where we can get to with current tech, that is tens of thousands to hundreds of thousands of mistakes every week ‑‑ nothing that the traditional apparatus of the state can deal with.  We need some sort of escalating system, some more legitimate appeal system that works at scale to deal with the initial sets of problems and has the opportunity to move over, perhaps, into the judicial system.  The function of due process is something we need to re‑imagine in a way that can work at scale.
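
To make the scale point concrete, here is a back‑of‑the‑envelope sketch of the arithmetic behind Nic's remark.  The decision volumes and the 98% accuracy figure are illustrative assumptions taken from his comments, not measured platform data; the exact totals depend entirely on what you assume.

```python
# Back-of-the-envelope arithmetic for the scale argument above. The
# weekly decision volumes and the 98% accuracy figure are illustrative
# assumptions drawn from the remarks, not measured platform data.

def weekly_mistakes(decisions_per_week: int, accuracy: float) -> int:
    """Expected number of wrong moderation decisions per week."""
    return round(decisions_per_week * (1.0 - accuracy))

for volume in (10_000_000, 50_000_000):
    print(f"{volume:,} decisions at 98% accuracy -> "
          f"{weekly_mistakes(volume, 0.98):,} mistakes per week")
```

Even pushing the assumed accuracy to 99.9%, tens of millions of weekly decisions still yield tens of thousands of errors ‑‑ which is exactly why no court system can absorb the appeal load.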

There is a fundamental trade‑off here that concerns me, because we are at a stage where we need AI to be able to assist decision‑making at that scale.  But there is a trade‑off between nuance and context sensitivity on one hand, and speed and consistency on the other.  And in that trade‑off lies the problem of due process: you can train machines to be incredibly consistent on the data they have seen in the past, but you cannot train them to take into account context that is not reducible to the data in their training sets.  We need to figure out how you perform that function at the sort of scale we're talking about. 

I want to take a step back, because what I really want to talk about is, I guess, constitutionalization: the idea that we need to re‑imagine not just due process, but the entire system by which we hold power to account. 

Old forms of protecting rights ‑‑ state‑centric ways of encouraging and promoting human rights with the state as the dominant actor, of protecting fundamental constitutional rights mainly against the state ‑‑ are very difficult to apply to complex hybrid assemblages of humans and machines making decisions that affect our human rights, but in the private sphere.

There is a difficult question ahead of us about how to re‑imagine not just due process but constitutionalism as a whole.  How do you hold power to account in a system of massive social and technical systems that are influenced, pushed and pulled in different ways by many different actors?  The state here is not the only regulator.  I am heartened to hear Natalie's presentation, and in general the work of organizations like RDR, because if we are going to hold this sort of decentralized power to account, we need decentralized systems that can monitor it and hold it accountable.  For me, that means we need better collaboration ‑‑ I will say multistakeholderism, because I am at the IGF, but I do mean it, in the sense that the change we are looking for is nothing less than self‑constitutionalization: the decision amongst the people who make and deploy automated decision‑making technologies to hold themselves accountable against public interest values.  That is a massive change.  It is not something that will come easily.  We will need, as a multistakeholder community, to exert massive pressure on those developing and deploying automated decision‑making systems to internalize fundamental rights.  It is not something that will be solved by various statements.

We need to find new ways to hold power to account.  For us, that means new methods to understand not just isolated examples, but how these large systems are working at scale: whose interests and values are they representing, and who is left out?  It means working with the NGOs who are best placed to shape the public debate, having better conversations with developers and those procuring automated and assistive technology, and giving those who need it better information and visibility about the risks and the types of interventions ‑‑ not just legal, but social and technical as well ‑‑ that might be effective in working on these processes.

That is not easy, but I think it is a pretty important project.  I will just end there on that point; I'm keen to get to the discussion and I'm out of time already.  I want to emphasize that no single actor is responsible for regulation at the moment: regulation is the sum of a lot of different forces.  If we really care about the public interest, there is no single approach that will improve the quality of decision‑making.  We need to think about how to do that in a decentralized, distributed manner.

     >> MODERATOR: Just a comment.  I totally agree.  One element of reflection: our fundamental rights and liberties are defined in a dichotomy ‑‑ the individual against the public power, the state.  But that is not the only power we face nowadays.  There are corporations that are much more powerful than the vast majority of states.  We have to redesign not only the tools, but also the way we think about power and the asymmetries of power created by the Internet, which has brought benefits but also challenges.  We have to redefine how we interface with this, and hopefully something could emerge from our work.  Do you have a comment on this?

     >> LUCA BELLI: It is interesting what you emphasized and highlighted.  I want to add the perspective of developing countries, inasmuch as the platforms are created in wealthy economies.  When it comes to other continents, including Africa, and developing countries, I think citizens are much more vulnerable.  So we need to raise awareness of the need for a universal, global understanding ‑‑ a global movement that can join efforts to solve or discuss these issues, not just regulate.  I like the comment; it is very important.  We cannot accept black boxes of AI being used in different cities and countries while we are told, oh, this is protecting values, or whatever.  We need to ‑‑ sorry.

     >> MODERATOR: That is a useful comment, actually, to refer back to some of the work done in the best practices.  There is the possibility for certain users to reach out to NGOs that might help in a particular case where content is removed: certain platforms include in their terms of service the contact information of such NGOs.  But in the platforms we have analyzed, those are always U.S.-based NGOs.  That would be helpful for U.S. users, but when it comes to the legality of content in other countries, there is no effective mechanism for users to be put in touch with someone who can help in challenging the removal.  That is perhaps something that can be explored for future best practices.

Now, let's move on to our final speaker, Marta Cantero, from Madrid, who is doing a lot of work on effective dispute resolution mechanisms.  I have asked her to focus a little bit on the artificial intelligence aspect of that.  You have the floor.

     >> MARTA CANTERO: Thank you, Nicolo, exactly.  As Nicolo and Luca introduced, the DC has been coordinating the best practices on platform responsibility, and one part of the best practices regards the safeguards around dispute resolution mechanisms.  These are aimed at raising the standards of protection of platform users, and in particular I am going to present some of the practices that we suggest for effective and fair platform dispute resolution procedures.  This follows up on Nic's presentation, because we are focusing on how to proceduralize certain aspects of dispute resolution in the context of digital platforms.  Effective and fair are the two messages I'm trying to convey here today.

Effective, because we need meaningful involvement of the parties in the dispute resolution mechanism, so that it is effective and does not remain useless.  And fair, because the procedure must respect certain basic rules of due process.  The best practices have been developed in the context of the right to an effective remedy, so in principle these procedural safeguards are aimed at enhancing that right. 

I'm not presenting all the best practices, but once more, we invite you to look at the IGF website, where you can find all of them.  You have until the end of the month to look at them and comment on the specific examples that came up while doing the research.

The way we present the best practices corresponds to a particular formulation.  Where we state that platforms "shall" implement certain practices, these are minimum standards that platforms must respect with regard to due process.  We use "should" where we simply suggest practices that platforms could implement in their policies.

Just to give you some examples: platforms should have a dedicated mechanism for dispute resolution between users.  Here, for instance, we came up with the example of Airbnb, which provides a resolution center that helps users settle disagreements among themselves.  This mechanism would be more effective if there were incentives for traders to actively and meaningfully engage with it.  As we know, if you rent out your room through Airbnb, it is a great platform because of the visibility, and people want to be part of it because of the network effects ‑‑ the many uses for those who are already Airbnb members.  So one way of increasing the engagement of traders is that those who do not engage in the dispute resolution procedure can be delisted from the platform if they don't respond meaningfully to the claims made by a guest.  This is a suggestion.

Another recommendation that we came up with while looking into different platforms' practices is that platforms should provide detailed and clear explanations concerning any requests or notifications made in the context of a dispute.  In that regard, Twitter offers much useful information about such requests in the context of a dispute.

Once more, you can look at the specific recommendations and practices on the website and in the document we have produced.  More specifically, in the context of disputes that arise between the user and the platform itself, we suggest that the platform shall provide an alternative dispute resolution mechanism. 

Here, we also suggest respect for certain due process guarantees, and in particular transparency requirements: how do platforms resolve a dispute between the user and the platform itself?  We suggest adherence to certain internationally recognized standards of dispute resolution, or to internationally approved arbitration rules.  By no means should the results of this dispute mechanism be compulsory: what we suggest is that these dispute resolution mechanisms be voluntary, and users should always be able to opt out from the resolution system.

I don't want to take much longer on this, but just to conclude with one of our final suggestions: we suggest that platforms provide sufficient reasons when they adopt a measure.  This is in connection, for instance, with the use of automated decision‑making and artificial intelligence.  With this I conclude the look at the recommendations, because we also suggest that any request for content removal shall only be acted upon after an internal human review.  This is important because, even though automatic flagging has become useful for detecting, for instance, illegal content to be removed, we have found that on YouTube up to 70% of content removals have been made thanks to automatic flagging.  Once more, in line with what Natalie was saying: if you are using artificial intelligence, what we need is some traceability, and to get back to some sort of human accountability concerning content removal and account deactivation.  These are principles that will ensure due process in interactions with the platform.
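
To illustrate the safeguard Marta describes, here is a minimal sketch of a human‑in‑the‑loop removal pipeline, where automated flagging only queues content for review and removal happens only after a human decision.  The classifier threshold, the Flag structure, and the reviewer function are illustrative assumptions, not any platform's real pipeline.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Flag:
    content_id: str
    reason: str
    source: str  # e.g. "automated" or "user_report"

def automated_flag(content_id: str, score: float,
                   threshold: float = 0.9) -> Optional[Flag]:
    # An illustrative classifier gate: a high score produces a flag,
    # but never removes content directly.
    if score >= threshold:
        return Flag(content_id, "possible policy violation", "automated")
    return None

def human_review(flag: Flag) -> bool:
    # Placeholder for the internal human review the best practice
    # requires before any removal; a reviewer confirms or rejects.
    print(f"Reviewer examines {flag.content_id} "
          f"({flag.reason}, via {flag.source})")
    return True  # the reviewer's actual decision would go here

def handle(content_id: str, score: float) -> str:
    flag = automated_flag(content_id, score)
    if flag is None:
        return "keep"
    # Key safeguard: removal only after a human confirms the flag.
    return "remove" if human_review(flag) else "keep"

print(handle("post-42", 0.95))  # reviewer is consulted before any removal
```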

As a general remark, I think that by adopting these best practices, platforms will contribute to achieving regulatory and policy goals.  In this way, platforms act as proxies for achieving certain standards of consumer protection.  That is where ...

     >> MODERATOR: Thank you, Marta, for your comments.  I think it is interesting also to highlight the need for oversight: whether it is automated dispute resolution or artificial intelligence, there needs to be human oversight of the reasoning and the criteria by which the decision is taken.

We have had very interesting food for thought, so let's open the last part of the session for debate.  I'm sure there will be comments; I already see a hand there.  Just raise your hand and we will open the floor.  Chris, go ahead. 



    >> AUDIENCE: Chris Marsden, University of Sussex.  If I could speak to what Nic was saying, and the comments around the room, and Luca as well: there is regulatory travel here.  If you have a statement at the top level of principles, you need implementation to take place at the lower levels.  In this case, human rights is where we start, then you go to the state level, regional implementations, and so on.  I think it is important to recognize the geometry of regulation taking place, and the individual decisions ‑‑ as Marta was saying, such as whether to put content back online ‑‑ are the real test of that regulatory pyramid.  That is important.  I wrote a report for the European Parliament on artificial intelligence and disinformation and the ways in which this is regulated.

There is one key takeaway to recognize: politicians are recognizing that the house is on fire, because jobs are at risk because of the Internet.  We have their full attention, as President Macron made clear yesterday.  We have to pay attention because content moderation costs money ‑‑ effectively, it costs journalists.  The tools we are asking platforms to use when it comes to hate speech and fake news are the tools of journalism, and we allowed platforms to siphon the money off for their own profitable businesses.  In actual fact, we have created, at scale, platforms that spread information globally between us but without the moderation in place to do so.  The platforms will not moderate properly.  The way we used to solve these problems was to have skilled intermediaries, and we need them back.  To do that requires regulatory action to say: you will have to demonstrate that you are staffed to do this ‑‑ and not staffed by using some mechanical process paying $1 an hour to tick on a spreadsheet that this is the correct item on Facebook.  Which brings you to the Facebook breast‑feeding nonsense, where any representation was seen as being incorrect.  I am wondering how people think we can go from A to B.  That is one of the problems we find in these discussions ‑‑ not so much here, where the DCPR is trying to show how we go from principles to effective implementation. 

I would love to hear the panelists' comments on how we achieve these things that are expensive for platforms to implement.  Thank you. 

     >> MODERATOR: That is the one‑billion‑dollar question.  Yes.  (Chuckling) I don't know if anyone on the panel has a one‑billion‑dollar reply.  If you have, please go ahead, and then we'll take a couple of other questions.

    >> NATALIE MARECHAL: I guess the billion‑dollar answer is: I don't care how expensive it is.  They have the responsibility to respect human rights.  They're raking in billions based on our activity, selling the data traces of what we do online and offline.  They profit from surveilling us and renting out access to our eyeballs to advertisers, right?  I think there is a bigger conversation that society has to have about whether or not this is an acceptable business model, but as long as it exists, they have a responsibility to deal with the human rights impacts of their activities.  Frankly, I do not care how expensive it is; they still have a responsibility to do it. 

(applause)

     >> I might make a quick interjection here: what if tech companies paid taxes?  We might have a little bit more money.

     >> I want to add that the companies are using these platforms to shape our future; this is about more than one billion.

     >> Yeah.

     >> MODERATOR: So it is now a one‑trillion question.  If anyone has an answer ‑‑ I mean, I think we all agree that this is something platforms have the responsibility to do, just as they have the duty to pay taxes where they operate.  But the one‑trillion question is still how to do so.  I will give the floor to that gentleman raising his hand.

     >> AUDIENCE: Hello, good morning.  I don't have an answer to that; I would like to ask another hundred‑million question.  I am from a Brazilian digital rights organization.  I think we have to discuss the economics of platforms.  We talk about responsibilities, but for a state it is hard to enforce any of the measures we can imagine against Facebook, with two to three billion users across all its apps, and all the power the platform has gained in the last five or ten years at most.  So I think we should look at this landscape with an economic eye: how can we get real competition, how can we bring competition policy into this era?  This is of interest to the OECD and other organizations, but I'm not seeing at the IGF that the stakeholders are particularly concerned about it.

I think that even if we have the one‑billion and one‑trillion answers, when our governments start to try to tax them, or to enforce the measures we want on content regulation and other things, we will have problems.  In Brazil, we had an election where WhatsApp was the main platform for information and misinformation.  The court couldn't talk to WhatsApp because it was in California and didn't want to provide information.  It is now the Philippines and Brazil where this has helped produce right‑wing governments.  I would like to discuss how to get to real platform accountability.

     >> MODERATOR: That is a huge question ‑‑ perhaps a dynamic coalition in itself.  I think it connects with what was said about having a human in the loop and enabling people to understand.  I think a lot can be done through application programming interfaces.  In competition law terms, this is a problem that could be addressed if you can enforce interoperability ‑‑ not only so that users can migrate to other platforms easily.  If we think about misinformation and how to combat it, you could think about another platform that allows users to filter what they are seeing, with Facebook or others making that interaction possible by providing an API through which third parties can interact with users and filter the content they're seeing.  Could that be a promising solution for the future?
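
As a thought experiment, here is a minimal sketch of the interoperability idea the moderator raises: a platform exposes a user's feed through an API, and a third‑party filter chosen by the user re‑ranks or removes items.  Every name here ‑‑ the Post fields, the fetch_feed endpoint, the filter logic ‑‑ is a hypothetical illustration, not any real platform's API.

```python
# Hypothetical sketch of feed interoperability: the platform exposes a
# user's feed via an API, and a third-party filter chosen by the user
# decides what is shown. All names (Post, fetch_feed, the misinformation
# label) are illustrative assumptions, not a real platform API.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Post:
    author: str
    text: str
    flagged_as_misinformation: bool  # label assumed to come from a fact-checker

def fetch_feed(user_id: str) -> List[Post]:
    """Stand-in for a call to a hypothetical platform endpoint,
    e.g. GET /v1/users/{user_id}/feed."""
    return [
        Post("alice", "Election day is Tuesday.", False),
        Post("bob", "Miracle cure: drink seawater!", True),
    ]

def user_chosen_filter(posts: List[Post]) -> List[Post]:
    """A filter the user opts into, run by a third party of their choice."""
    return [p for p in posts if not p.flagged_as_misinformation]

def render_feed(user_id: str,
                feed_filter: Callable[[List[Post]], List[Post]]) -> None:
    for post in feed_filter(fetch_feed(user_id)):
        print(f"@{post.author}: {post.text}")

render_feed("user-123", user_chosen_filter)  # shows only alice's post
```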

     >> I will react to this comment.  I understand your concerns, including in emerging economies.  What is important to highlight is that education, and the media and information literacy of those receiving information, are key issues when it comes to disinformation and fake news.  At UNESCO, we have a program on media and information literacy, going to all member states with tools and people in place.  We agree that engaging platforms is important. 

We need to understand what kind of information needs to be filtered and how we interact with all the information brought to the platforms.  But it is not only a matter of platforms: there are countries where such education is not available, and governments need to be aware of the importance of educating people to prevent fake news, as we saw in different countries just prior to elections.  That is where UNESCO will work with different NGOs and stakeholders to deal with this.

     >> MODERATOR: A quick comment.  I have been dealing with this for years.  You only have the fake news problem where individuals are not able to distinguish between a charlatan and true fact.  If people go to school, they will be able to look at something and tell whether it is falsehood or true fact.  Here, true multistakeholderism means involving individuals.  And here I come back to the earlier comment on paying taxes: if you want to operate here, gaining hefty profits on the data you collect, taxes need to be gathered, and those need to be earmarked and used for education.

    >> NATALIE MARECHAL: That is what I mean when I say political problems demand political solutions and personal problems demand personal solutions.  We should focus on the economics: how do economic incentives drive companies' behavior?  One of you mentioned having the Facebook or Twitter feed go through another platform that controls the algorithm.  That idea has been floating around, but it has not been taken up, because if they lose the ability to control what our eyeballs are exposed to, that is a huge threat to the business model. 

So look at the economics, the financial incentive structure, and the companies' business models.  That is how to address the content problem: it is not a content problem at its core, it is an infrastructure problem.

     >> MODERATOR: I think we have provoked you enough.  Unfortunately, we are already out of time.  We can continue with questions outside of the room.

     >> We're out of time because the MAG decided to slash our time from 90 to 60 minutes.  I urge you to complain and write to [email protected] ‑‑ I mean, those 30 minutes of comments and debate would have been meaningful.  Thank you very much.

    >> NATALIE MARECHAL: Please relieve me of the brochures.  Thank you.