IGF 2017 - Day 2 - Room XXV - WS68 Fake News, AI Trolls & Disinformation: How Can the Internet Community Deal with Poison in the System

 

The following are the outputs of the real-time captioning taken during the Twelfth Annual Meeting of the Internet Governance Forum (IGF) in Geneva, Switzerland, from 17 to 21 December 2017. Although it is largely accurate, in some cases it may be incomplete or inaccurate due to inaudible passages or transcription errors. It is posted as an aid to understanding the proceedings at the event, but should not be treated as an authoritative record. 

***

 

>> MATT CHESSEN: ‑‑ of artificial intelligence.  I noticed that there are a lot of emerging artificial intelligence tools coming out that could dramatically enhance the impact of computational propaganda, and that really drew me into this, because it's a scenario that we really need to pay a lot of attention to and focus on.  So I'm really glad we're having this session today. 

I want to start off by saying that we should make sure this whole conversation and all of our solutions are focused on preserving freedom of expression and avoiding censorship.  That's really important in this conversation, because censorship is often a very reflexive response to these types of problems. 

The U.S. government has always promoted a vision of an open, interoperable, reliable and secure Internet, and freedom of speech is really core to that.  We believe that all of the rights people have offline they should also have online, and one of those is freedom of expression.  A lot of the Internet's value for all of us is really in how we connect to each other, how we use it to associate with each other, how we use it to express our feelings and our views and our ideas.  We don't want to do anything that negatively impacts that connectivity, that vibrancy and entrepreneurship on the Internet.  So let's just keep that in mind as we're talking about this. 

Rather than focusing on the problem, which a lot of people know about and which has been discussed a lot, I would really like to see people focusing on solutions.  I know there's a roomful of fantastic solutions out there.  People over here might not know about solutions that are working over there, and that sort of connection, I think, will be really valuable.  Thank you. 

>> ALINA POLYAKOVA: Thank you, Chris, and thanks for putting this panel together.  Thanks, Matt, for your opening comments.  I'm Alina Polyakova of the Brookings Institution.  Until recently, I was director of research at the Atlantic Council, focusing on European issues as well, which is where I connected with my fellow panelists here. 

These last few years I've been working a lot on looking at how state actors specifically have deployed digital tools in the form of political warfare or hybrid warfare.  Clearly Russia has been at the forefront, I think, in defining what it means to manipulate public discourse online with the use of computational propaganda, trolls, et cetera, but also through the deployment of state resources like media funded by the Russian state, and through cyber attacks that are used to glean information and then leave that information out in the open, with campaigns built on those leaks and stolen data.  

Now, in the United States, we're, I think, at a watershed moment of a kind we haven't seen in any recent (?) for some time, and I can tell you that even two years ago nobody cared very much about this.  But if we look at other places in the world, particularly Ukraine, Georgia, other post‑Soviet states and now even most European countries, we can see that there's been a sort of evolution of the tools of political warfare.  The Russian government is heavily invested in this ‑‑ and not just the Russian government ‑‑ and these tools are becoming cheaper.  These technologies become more prominent, more accessible to the population.  They become more easily deployed, and they become much more difficult to detect, not just for Civil Society actors, investigative researchers and reporters, and governments, but also for the platforms themselves: Twitter, Facebook and Google.  Over time, as the AI tools Matt was mentioning make it increasingly difficult for us to tell what's real, how do we make sense of what's real and what's not in the digital environment?  These are the things, looking to the future, that we should be thinking about: solutions and policies.  And as Matt said, I do hope this conversation will mostly focus on that.  Because it's clear to me, and I think to most people in the field, that to respond to various forms of digital manipulation, whether from state actors or nonstate actors, we need a whole‑of‑society response.  That means there is a role for governments to play: there's a regulatory environment that can and should be set up, and it will vary from country to country. 

There is a lot that Civil Society can do ‑‑ independent voices, independent media ‑‑ to counter bad information and misinformation.  I really don't like the term fake news, as I was telling the panel, because it's obviously become a political term used by political actors and others who are trying to discredit truthful information and, you know, basically use it as a bullying tool, more or less.  But there's also a role that media and companies have to play as well.  I think this is a place we need to be focusing on: understanding how we set up avenues of dialogue, information sharing and cooperation between the public and private sectors that will actually take us out of reacting to every single attack or every single disinformation campaign, and let us get ahead of the agenda and actually set the narrative before it is set for us.  So thank you. 

>> DONATIEN NIYONGENDAKO: Thank you.  My name is Donatien Niyongendako.  (?) seeks to strengthen the work of human rights defenders throughout the region by reducing their vulnerability to the risk of (?) and enhancing their capacity to effectively defend human rights, in countries such as Burundi, Kenya, Sudan, (?) and Somalia.  (?) working to facilitate the intelligent, strategic (?) of technology by human rights defenders in Africa. 

(?) interlinked with digital sites, application support and data (?) for human rights organizations.  As for our discussion today on fake news, it is a crucial issue in the region (?), especially (?).  This often happens in election scenarios (?), during elections.  Here it is good to discuss what attitudes to adopt in the face of fake news and disinformation, and how to tell when information has been poisoned.  What are the objectives ‑‑ or who are the sponsors?  What are their ambitions?  What are the sources of fake news?  How often has information been (?)?  So we can discuss this and find solutions in the fight against disinformation in Africa and the whole African region. 

>> PANELIST: Everybody cross your fingers to the technology gods here.  We should be joined by Sam Woolley through the speakers. 

>> SAMUEL WOOLLEY: Can you hear me?  Chris, can you hear me there? 

>> PANELIST: He's in here.  In the little boxes. 

>> SAMUEL WOOLLEY: Yeah, can you hear me?  Chris, can you hear me? 

>> CHRIS DOTEN: Okay. 

>> SAMUEL WOOLLEY: I hear you.  Do you hear me? 

>> CHRIS DOTEN: Okay.  For people who may not have caught that, it's Channel 0 and you can up the volume.  All right.  Hello, Sam. 

>> SAMUEL WOOLLEY: Hi there, Chris.  Can you hear me? 

>> CHRIS DOTEN: We can hear you. 

>> SAMUEL WOOLLEY: Okay, great.  So what I want to say is, my name's Sam Woolley.  I'm the co‑founder of the Computational Propaganda Project at Oxford University.  I'm currently starting a new lab in Silicon Valley called the Digital Intelligence Lab.  The goal of the new lab is to engage with the makers of tech in San Francisco, specifically Facebook, Google and Twitter, on questions of how technology gets used to manipulate the online sphere. 

I want to talk about three things to watch with computational propaganda just really briefly. 

>> PARTICIPANT: Can you speak louder?  Thank you. 

>> SAMUEL WOOLLEY: Three things to watch and three things to do.  Is that better? 

>> PANELIST: It's marginally better. 

>> SAMUEL WOOLLEY: Yeah, I'm speaking quite loud, so I'm not sure how much louder I want to speak without shouting.  Three things to watch.  First, issue‑public specificity: we've seen a lot of attacks around the world lately that focus upon specific social groups online.  Second, we've seen a lot of back‑end and front‑end social attacks.  It's important to remember that these attacks don't just occur through the front end of sites; they don't only occur by sharing what we call fake news or disinformation.  They also occur by trying to manipulate the algorithms that show trends or that decide what appears in our newsfeeds.  And third, we've seen a lot of government‑sponsored trolling through bot usage, which oftentimes results in chilling effects or censorship, especially of journalists. 

Three things to do, right now we're doing experimental bots research showing, you know, the actual effects of how bots are used in computational propaganda attacks.  I think this is essential.  Second, we're working to follow the money.  Who's behind the usage of computational propaganda worldwide?  And third, we're doing ethnographies with makers of this technology so we actually learn about who's doing the building rather than only focusing upon the users or upon sort of the broad system.  So that's where my head is at in this right now. 
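To make the "experimental bots research" concrete, here is a minimal, hypothetical sketch ‑‑ not the Computational Propaganda Project's actual methodology ‑‑ of the kind of first‑pass heuristic researchers often use when studying automated accounts: unusually high posting volume combined with suspiciously regular spacing between posts.  The thresholds and function name are illustrative assumptions, not validated values.

```python
from datetime import datetime, timedelta
from statistics import pstdev

def automation_score(timestamps, max_daily_posts=50, min_gap_stdev_seconds=5.0):
    """Crude two-signal heuristic: 0.5 for high posting volume, 0.5 for
    near-clockwork regularity of the gaps between posts. Purely illustrative;
    real research combines many more signals and human review."""
    if len(timestamps) < 3:
        return 0.0
    times = sorted(timestamps)
    span_days = max((times[-1] - times[0]).total_seconds() / 86400.0, 1e-6)
    posts_per_day = len(times) / span_days
    gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
    gap_stdev = pstdev(gaps)   # low spread = very regular, machine-like timing

    score = 0.0
    if posts_per_day > max_daily_posts:
        score += 0.5
    if gap_stdev < min_gap_stdev_seconds:
        score += 0.5
    return score

# Example: an account posting exactly once a minute, around the clock.
clockwork = [datetime(2017, 12, 19) + timedelta(minutes=i) for i in range(300)]
print(automation_score(clockwork))   # 1.0 -> worth a closer, human look
```

A score like this is only a flag for further inspection, not proof of automation, which is why the experimental and ethnographic work Sam describes matters.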

>> PANELIST: That's great.  Thanks very much, Sam.  Thanks for straining your vocal cords for us and thanks, everybody.  So I think with that as openers, let's get the conversation started.  So any stories, thoughts, solutions particularly that you'd all like to share with the group here today?  All right.  Well, Matt had some further thoughts that he might want to start us off with. 

>> MATT CHESSEN: So, just a couple of lessons that we've learned from our public diplomacy, and I'll just read off a few of them.  One is not imitating the enemy, the malicious actors.  This is really important.  We shouldn't play the disinformation game.  We need to maintain our values and our ethics and not use these tools the way that some of these malicious actors are using them.  Having a credible message based on facts and evidence that acknowledges underlying grievances is important as well.  So is partnering with credible, independent and trusted messengers, especially local messengers.  That's really important for governments.  A lot of times, you know, the government comes out and says something and, well, people have all sorts of reactions and they might not believe it.  But if it's actually people in those communities saying those truthful facts, people might be more likely to believe them. 

Using technology in the right way, which is to identify the audiences that you're trying to reach and the best approaches to reaching them.  Using analytics to evaluate the effectiveness of your messages and then feeding that information back into your communication processes.  And more importantly, you have to have that top‑level support in your organizations to be able to engage in this type of activity. 

A lot of the research ‑‑ there was a great paper out of RAND called the Firehose of Falsehood ‑‑ acknowledged that people can be easily influenced by the first thing that they hear, and that countermessaging a lot of times doesn't work because it just reinforces the original message.  An alternative might be directing a steady stream of truthful messaging so there's no gap for these malicious actors to fill.  Those are some of the things we're thinking about in government for countering some of the issues around disinformation and computational propaganda. 

>> PARTICIPANT: An idea.  Johnny, from the University of Cambridge.  You know, the computer science community could self‑regulate, or pressure could be put on that community more broadly by the rest of us to self‑regulate, the same way that the legal community does and the medical community does ‑‑ some kind of internal infrastructure to identify bad actors and to promote ethics.  Ethics could be taught in computer science classes in colleges and universities, basic stuff that might help promote better conduct. 

>> MATT CHESSEN: So that's for the people actually building the algorithms and software that undergird all these systems. 

>> PARTICIPANT: It's not going to be a catchall but could hopefully help send things in that direction. 

>> PARTICIPANT: I'm Hans Klein from Georgia Tech.  One of the panelists said fake news is a bad term, and I would agree with that.  Part of what political warfare consists of is simply amplifying dissent ‑‑ and dissent that's arguably legitimate.  So we see examples posed of Russian actions; the two most extreme claims are that the Russians did Brexit and the Russians did Donald Trump.  Now, even if that's true, there's a legitimate, significant portion of the population who backed the idea of Brexit, who think it was a good idea.  And obviously, among American voters, there's a substantial number who backed Donald Trump.  So if information warfare's main modus operandi is to find and amplify different interpretations ‑‑ legitimate interpretations ‑‑ then where does that leave us?  Because it's one thing to say, that's fake, let's get rid of it.  But if it's, hmm, that's dissent, let's get rid of it, then we have a problem if we find ourselves in that second situation. 

>> MATT CHESSEN: So responses from up here.  And hold up two fingers if you have thoughts or comment.  Alina? 

>> ALINA POLYAKOVA: It was the Russians.  Just to respond to your question on Russia: I also don't think we should attribute too much of where we currently are to Russian actions, or any state actions for that matter, or to their ability to actually manufacture events, right?  So Donald Trump wasn't elected because of the Russian information campaign.  Brexit didn't happen because of the Russian information campaign.  The movement in Catalonia did not happen because of the Russian information campaign, right?  But to your point, this is how active measures work; this is how the Soviet Union did things as well: you find certain tensions, fissures in society, and then you play both sides.  There's some very clear evidence, at least as far as Facebook has released some of the RT ads that the Russian propaganda media arm took out during the election campaign in the United States.  The platforms, when they did their congressional testimony on November 1st, which I would encourage everybody to read ‑‑ it was really fascinating ‑‑ the ads clearly showed that they were not campaigning in the political sense of the term; it wasn't that they were endorsing a certain candidate.  They had some accounts posing as (?) activists and other accounts posing as pro‑gun activists.  Then they had these advertisements for Hillary, right?  And they would target these specific ads to very specific parts of the U.S. population ‑‑ very specific states, specific local attitudes ‑‑ using all the information that Facebook provides to its customers, its advertisers: microtargeting. 

So I think the issue we're facing is that we are vulnerable as democratic societies exactly because we are open, right?  Exactly because RT ‑‑ and not just RT, other media organizations, other online accounts ‑‑ can easily enter our public discourse in a way that Western independent organizations cannot enter the public discourse of places like Russia, where it has been severely repressed, or places like China, right? 

And so where we find ourselves today is: how do we avoid, you know, shutting down accounts, or advocating that we should mandate that Twitter and Facebook shut down accounts, right?  It's not about censorship, right?  But how do we face the fact that there is pluralism in our societies and that it does make us vulnerable, right?  And how do we continue to maintain those values of openness and democratic values ‑‑ freedom of expression and freedom of the press ‑‑ while still allowing voices of dissent, right? 

I think the difference is between normal people ‑‑ meaning voters who have opinions and views and put information out there, or tweet about conspiracy theories, whatever ‑‑ and a state using that, trying to manipulate discourse around specific events, around elections, which we saw, or around very divisive religious or cultural issues.  So we have to be careful about how we talk about this. 

>> PANELIST: I think the gentleman back there had something. 

>> PARTICIPANT: I don't think RT sponsored it.  I think it was the Internet Research Agency.  And you actually cited a number of examples that were actual fake news: accounts posing as fake groups.  Those are fake news, not dissent ‑‑ posing as Islamists.  But a lot of those are separate from RT.  (?)

>> MATT CHESSEN: Any other comments on this?  Sir?  Yep. 

>> PARTICIPANT: Hi.  I'm from Japan, and I've been researching online movements, and I came to think that people actually want fake news, you know, for feeling good or something.  So I seriously doubt that fact checking will work.  So I think monitoring fake news (?) is much more political (?), because those who want fake news are very politically motivated (?).  I would really appreciate it if you have thoughts about this kind of thing.  Thank you. 

>> PANELIST: I think you make an excellent point.  A lot of the solutions that I've heard actually focus on two things.  One is providing people more information or context about the information they're consuming.  These would be things like fact checking, the reputation scores for media outlets that people have talked about, and flagging things as disinformation. 

The second thing people talk about a lot is the education piece, where you're trying to educate people and help the citizenry become better, more savvy consumers of information.  I think people miss out on a third piece a lot, and that is the emotional aspect.  A lot of people consume disinformation ‑‑ and please, let's use disinformation rather than fake news.  I totally agree, fake news is not a great term; disinformation is much more precise about what we're talking about.  But people consume disinformation because it's emotionally pleasing for them.  It's not necessarily about whether they believe it's true or not.  And so we really have to address that component.  Some people have said, you know, maybe we need to package truthful information in a way that is more emotionally pleasing than the disinformation, and maybe that involves making it entertaining or integrating satire or humor into these types of things.  But that's a big piece that I haven't heard a lot of people talking about: the emotional component. 

>> PARTICIPANT: Hi, I'm from the University of Mexico.  I wanted to comment and mention what I'll be working on.  I think it's also emotionally displeasing to find yourself being fooled.  And I think (?) it needs to happen; once people realize that, they feel more wary about what they're seeing.  I experienced that process myself.  I found myself retweeting or sharing information that then turned out to be false.  Then I felt like, oh, damn it, I don't want this to happen again, and I became more wary about the information I was sharing. 

In Mexico, in September, for example, there was this project that was spontaneous.  A lot of people wanted to help, and there were a lot of people that needed help, but there was information (?) because of nefarious ‑‑ because of the chaos.  There were people who couldn't find the information they needed in a timely way.  So organizations and people organized themselves to verify information on the ground and then feed out verified, updated information, so we could connect the help that was needed with the help that was being offered.  There should be research about what the effect of this was, but having been there, I think there were very positive aspects to it.  Now we're trying to take the same model to the elections that are going to happen in Mexico next year.  We are attempting to replicate it in a more planned way, with more dedicated resources, to locate people on the ground and verify information (?) disinformation campaigns, for example, (?) that try to suppress votes.  As you said, create legitimacy (?) so that when there's a need to put together an information campaign, it's already established and people can recognize a trustworthy channel for information. 

I just wanted to share the experience I had with the earthquake.  We're trying to put it into action, and we'll see next year. 

>> PANELIST: That's great.  Anyone else? 

>> ALINA POLYAKOVA: I want to pick up on that point about the importance of the messenger.  A lot of these conversations come down to: what do we do?  And what comes out very clearly is that the source of truthful information needs to be a trusted source.  Then you get into the question of, well, how do you develop trust with the population?  Clearly we're in an era of declining trust.  So there's a very limited role that I think governments can play in the current environment, given how institutions are trusted.  Which is why, in many ways, it has to be up to Civil Society to develop these tools that are embedded in local communities, where you know the resources behind them, right? 

I'm also very skeptical ‑‑ and I'd be curious to hear the group's thoughts on this ‑‑ about the ideas Matt brought up around labeling and things like that.  You know, I don't know if this is an accurate number, but I saw a figure on how often people actually click through what they see in the Facebook feed: it's less than 1%, on average.  So people don't even read most of the information they share, right?  So how do you click through and figure out whether this is a legitimate source or not, right?  Even if you have some sort of consumer protection agency that is doing rankings for news outlets or media resources ‑‑ (?) you know, a "C" or something like that ‑‑ how do you get people to even pay attention to these things?  And I think it's very difficult exactly because of why people are consuming this: it's infotainment, right?  You crave it.  You desire it.  It kind of makes you feel the way a Big Mac might make you feel, even though it's bad for you.  So that's the crux of it, and it would be interesting to hear other folks talk about it. 

>> PANELIST: That's all on platforms where there is a curated feed.  On some platforms, even those filters or algorithms aren't at play.  Donatien was telling us a bit earlier about how WhatsApp is used for disinformation in Uganda and Burundi.  If you wanted to share a little bit about WhatsApp in East Africa. 

>> DONATIEN NIYONGENDAKO: Thank you.  In East Africa, the information ‑‑ news information is especially (?) political (?).  So on Facebook or Twitter, people create a second account similar to (?), and they publish fake news from that account, so it looks as if it is the human rights defender who is publishing that fake news.  So (?) this forces defenders to start another account, similar (?), which is not good.  So it's good to have a network that collaborates with Twitter or Facebook to help human rights defenders disable those accounts ‑‑ the fake accounts created by other people ‑‑ because (?) election (?) in the country. 

>> PANELIST: Other comments?  Questions?

Ideas?  And I'd note that so far there haven't been any women who have spoken.  Ah, there's one. 

>> PARTICIPANT: Okay.  I'm from an organization that works with media and digital rights.  And I would like to ask a question about how to respond in an effective way to the panic regarding fake news in elections.  We are facing elections in 2018, and Brazil is trying to find a solution.  Our electoral court there formed a council with the Brazilian intelligence agency, the Army and other players, including the Internet Steering Committee.  The Army and the Brazilian intelligence agency were called to give suggestions on how the electoral court should regulate the elections regarding fake news.  And they published a resolution on the 18th, so yesterday, and it is related to content removal within 24 hours, (?) and things like that.  So on the one hand, we are talking about education, we are talking about thinking of the emotional side of fake news and trying to find solutions that are not exactly short‑term solutions.  And on the other hand, we have states facing elections and trying to find very quick solutions that often are related to censorship and criminalizing online discourse.  So how do we balance this panic on one side with short‑term solutions that are not censorship or those kinds of things? 

>> PANELIST: Yeah, great question.  I'd throw that back out to the audience here.  Anyone who's navigated an election recently and has been wrestling with these problems that has any stories, successes or failures to share? 

[ Phone ringing ]

>> PARTICIPANT: Hello.  (?) Actually, I'm just an observer.  This was happening with Brexit, and now with the referendum and general elections in the UK.  Actually, it is the government that must be ashamed if fake news actually affected the general elections or the referendum.  But we must also take into consideration that, for example, during the referendum ‑‑ I mean the Brexit referendum ‑‑ turnout was 72%, which was driven not by fake news but by the promises of some members of the government to make people more prosperous.  Turnout was much higher in 2016 than during general elections, where it is usually not more than 40%. 

And we also must understand that the audience, the people, are actually not so stupid as to believe all the fake news.  There was one gentleman who said that if people believe in fake news, it's because they are willing to believe in fake news.  And it's actually the government that's responsible; there's not much trust between the people and their institutions there. 

>> PANELIST: Great.  And our virtual friend, Sam, wanted to come in.  So if everybody could hush and listen with your magic little earbuds. 

>> SAMUEL WOOLLEY: Hi, everybody.  I'm back.  Now am I really, really loud?  I'm either too quiet or too loud.  Sorry about that.  So, hey, I just want to say something here.  I think that one thing we need to talk about is the platforms themselves and what they're doing to address this issue.  I think a lot of the solutions have been focused upon putting the onus back on Civil Society or journalists to vet information.  These groups are already limited in their abilities and their resources.  Whether or not we like it ‑‑ we discussed censorship earlier; Alina, you touched on this ‑‑ there's going to have to be some kind of regulation of what happens on these platforms.  Not only is there going to have to be, there will be, and there already is.  And so I think that we need to get ahead of what that regulation looks like.  And we need to make sure that it's not so heavy‑handed that it drowns out the ability of activists to use these platforms for good.  One thing that I've been working on is a platform for realtime detection of what we call inorganic disinformation campaigns. 

What it does is help people to report on and understand what's going on during elections rather than post hoc, which is what we've mostly been doing.  Another thing, though, is that governments seem to be woefully unprepared to approach this issue.  And so another thing that needs to grow out of conversations like this, but also the broader conversations that are taking place, is that Civil Society has got to continue to work with governments to educate them ‑‑ all of us here at IGF have got to help them understand what the problem actually is ‑‑ rather than taking a heavy‑handed approach, say, getting rid of all bots on a website or something like that. 
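For readers unfamiliar with what "realtime detection of inorganic disinformation campaigns" might involve, here is a minimal, hypothetical sketch ‑‑ not Sam Woolley's actual platform ‑‑ of one common starting signal: many distinct accounts pushing near‑identical text within a short time window.  The thresholds, data layout and function name are illustrative assumptions only; real systems combine many more features (timing patterns, follower graphs, URL reuse, account age).

```python
from collections import defaultdict
from datetime import datetime, timedelta

def flag_coordinated_texts(posts, window_minutes=10, min_accounts=20):
    """posts: iterable of (account_id, text, posted_at) tuples.
    Returns {text: number_of_accounts} for any text pushed by at least
    `min_accounts` distinct accounts inside a sliding `window_minutes` window.
    A toy signal only, meant to illustrate the idea of 'inorganic' behavior."""
    by_text = defaultdict(list)
    for account, text, posted_at in posts:
        by_text[text.strip().lower()].append((posted_at, account))

    window = timedelta(minutes=window_minutes)
    flagged = {}
    for text, events in by_text.items():
        events.sort(key=lambda e: e[0])
        start = 0
        coordinated = set()
        for end in range(len(events)):
            # shrink the window from the left until it spans <= window_minutes
            while events[end][0] - events[start][0] > window:
                start += 1
            accounts = {acct for _, acct in events[start:end + 1]}
            if len(accounts) >= min_accounts:
                coordinated |= accounts
        if coordinated:
            flagged[text] = len(coordinated)
    return flagged

# Example: 25 sockpuppet accounts posting the same slogan within 25 seconds.
burst = [(f"user{i}", "Vote is rigged! #election", datetime(2017, 12, 19, 12, 0, i))
         for i in range(25)]
print(flag_coordinated_texts(burst))   # {'vote is rigged! #election': 25}
```

Flagging is only the first step; as the discussion above notes, what happens after detection is where the censorship questions begin.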

>> CHRIS DOTEN: Great.  Thanks, Sam. 

>> PARTICIPANT: Yes.  One of the issues is an economic issue, because it's very cheap to produce lies and disseminate them through the Internet, and it costs money to produce high‑quality news and information.  And that economic disparity is one of the things people talk about as an area for intervention and regulation.  I just wonder what people think could be done to disincentivize the dissemination of disinformation.  Because, you know, as long as you can get money for that information, the factories that do it are just going to keep doing it.  If they're clever enough, they can get this stuff out there and get ads put on it.  So one of the solutions that has come up is the possibility of taxing the platforms for systematically propagating that kind of disinformation on a regular basis, when it's been proven that it was a problem. 

I don't know what else can be done.  Are there other ideas out there about this ‑‑ about how to handle that problem? 

>> CHRIS DOTEN: There's a gentleman back there with a response, I think. 

>> PARTICIPANT: Hello.  My name is Dan.  I'm from the Media Development Center.  And my comment is in line with what my predecessor just said.  I think we have two separate issues here.  One is the issue of producing content to misinform the public for one's own political role or goal.  The other aspect is redistribution.  As you know, (?) there were young people, teenagers, who used this opportunity to make a lot of money ‑‑ for Macedonian circumstances ‑‑ without actually going into who they were supporting or who they were against.  Mind you, most of their sources, what they republished, were American sources.  So it was not Russia or China or anything. 

And second, according to their own testimonies (?) to journalists from abroad, they first tried the same operation with standards.  (?) It just didn't bring enough money and attention on Facebook.  So it goes, I guess, to (?) that people wanted it because it's funny, because it's entertaining, because they're pissed off at the world, if you want.  But also, they started it (?) in a sense that they could help Trump's campaign, because this is a small, impoverished country and this was a real source of income (?) in a short period of time.  Thank you. 

>> PANELIST: I think the economic side is very important to this.  But I'm really troubled by some of these ideas of taxing disinformation, because that sounds a lot like censorship to me.  Who's deciding whether it's true or false?  Especially if you have a government doing it ‑‑ you know, the platforms can do what they want with the content on their platforms, but you don't want governments making those types of decisions.  There are malicious ad networks that put out disinformation specifically because they want people to click on it and leave Facebook or Twitter or some other platform, go to a website where they then put adware to basically be able to track those users on the Internet, right?  And they use and sell that data.  So they don't care what's in the disinformation.  They just try to get something that's emotionally pleasing so people click on it and leave the platform for their own website. 

So the question you have to ask there is: who's actually paying for the ads on those networks, and do those companies have a responsibility, then, to figure out where their ads are going?  I think the private sector has woken up a little bit to this in the last year ‑‑ that some of their ads were being placed next to videos and content that they didn't want their ads associated with, which is (?) speech.  I think it's a lot of little choices among private sector companies. 

Similarly, the way we've gotten used to not paying for news has devastated the news industry.  It's destroyed local news in the United States.  A lot of news organizations can't afford editorial staff or investigative journalism.  That's lowered the quality of news overall, and that makes people less trusting of news.  And then (?) getting all the revenue.  So news has to look more like ‑‑ news has to look more like clickbait in order to get the revenue to survive. 

So one suggestion that someone mentioned was that more people need to be paying for news.  And I don't know if that's the right solution or not.  But with a lot of the economic questions, you don't really want the heavy hand of government coming down on this.  I think it's the people who are actually putting money into the system, or not putting money into the system, who need to think about what they're incentivizing or disincentivizing. 

>> PANELIST: Great.  We're down to our last ten minutes.  There are a couple responses.  One there and then you over there. 

>> PARTICIPANT: On the issue of (?), I remember seeing research that was essentially saying that fake news more or less travels over the (?) networks that the media have, in the majority of cases.  Now, an interesting parallel, I think, would be to look at how the platforms have reacted to other forms of content.  I think most of (?) in the room, maybe courted by government, and they decided to take action.  Now, there doesn't seem to be that sort of concerted action against fake news.  I'd like to hear more about why that is. 

>> PARTICIPANT: Just a quick response to the economics thing.  Emily Bell has called for a fund ‑‑ for the big platform companies to put a billion dollars into investigative journalism ‑‑ as a short‑term fix.  And on the issue of production, just so we are making sure we're preparing for the fake news ‑‑ or the disinformation ‑‑ of the future, not just the disinformation we're accustomed to currently: I was at NIPS last weekend, which is a big AI conference.  Researchers were presenting two different methods, one of which could substitute someone's voice and lip movements in a video using just a one‑minute audio clip of them speaking, plus sufficient video data.  And there's also now someone who has swapped the face of Gal Gadot, the actor, into a porn video.  Which basically means that video will be another medium that is very difficult to trust as real, like audio.  So we should prepare for that future as well. 

>> PANELIST: So you set me up perfectly, because I published a paper on this.  It looks at some of these emerging artificial intelligence technologies.  You know, these technologies in and of themselves are not malicious, but the way they could be used could radically enhance the effectiveness of some of this computational propaganda ‑‑ technologies like chatbots and dynamic content generation, combined with some of this audio and video manipulation.  If you haven't seen some of the video manipulation, there's a great example where they take just a photograph of Barack Obama and an audio track, combine them, and it actually generates a video of him speaking that audio track even though it's completely different from the original video.  And there are lots of examples of these types of things. 

Combine that now with psychometric profiling, and use it on top of a lot of the techniques that Sam Woolley has exposed in computational propaganda.  We're entering an era where these techniques could be much more powerful than they are now, and where you may not be able to trust any audio or video, because you're just going to have this pliable reality, and these systems are increasingly going to be able to generate dynamic content and shape narratives in realtime.  So this is a real concern.  I've heard proposed solutions ‑‑ that we need some sort of digital signatures for audio and video.  But the audio and video manipulation is probably one of the things that I'm most concerned about as far as emerging technologies go. 
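As a rough illustration of the digital‑signature idea mentioned above ‑‑ a hypothetical sketch, not a description of any deployed system ‑‑ a publisher could sign the raw bytes of a clip at publication time, so that any later manipulation breaks verification.  This assumes the third‑party Python `cryptography` package and leaves out the hard parts (trusted key distribution, re‑encoding, edit provenance).

```python
# Hypothetical sketch: signing media bytes with Ed25519 so tampering is detectable.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

publisher_key = Ed25519PrivateKey.generate()        # held by the publisher
public_key = publisher_key.public_key()             # distributed to viewers

clip = b"...raw video bytes of the original footage..."   # stand-in for a real file
signature = publisher_key.sign(clip)                # published alongside the clip

def clip_is_authentic(data: bytes, sig: bytes) -> bool:
    """Returns True only if the bytes match exactly what the publisher signed."""
    try:
        public_key.verify(sig, data)
        return True
    except InvalidSignature:
        return False

print(clip_is_authentic(clip, signature))                            # True
print(clip_is_authentic(clip + b" (one altered frame)", signature))  # False
```

The hard problem is not the cryptography but the surrounding trust infrastructure: who holds the keys, how viewers learn them, and how legitimate edits are recorded.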

>> PANELIST: But we also need to educate those doing advanced communication and artificial intelligence research, and that's something we can mandate now. 

>> PANELIST: So the problem there, though, is that a lot of these technologies are dual use, right?  A lot of these technologies are directly relevant to the generation of audio and video, for example.  And so when you have these technologies ‑‑ and that's true for just about all of these AI technologies, right?  None of them are inherently malicious.  And so do you tell people not to work on things that the private sector is demanding for their own use when it could then be used for malicious purposes?  I would actually say that we need to focus more on the malicious actors, what they're actually doing and then the malicious effects.  The technology isn't the problem.  The technology can be used for good or ill. 

>> PANELIST: I don't think it's mutually exclusive. 

>> ALINA POLYAKOVA: One thing I don't hear a lot of people talk about ‑‑ it goes back to Matt and Sam's points about the responsibility of the platforms ‑‑ is the fact that, you know, any actor can use the tools provided by Facebook or Twitter to deliver specific messages to specific audiences in a much more refined way than ever before.  And one thing that I've been really concerned about is Facebook's introduction in 2016 of emotional responses to content, when they went from likes to emoticons, which opens up a Pandora's box.  Now you know not just what content people like but also what makes them angry or sad or happy.  And this is the kind of information that Facebook provides to advertisers.  I think this is really, really troubling, and we're not really talking about it.  And I think there needs to be some ‑‑ maybe it's a code of conduct modeled after the one that exists in the EU today, focusing not on hate speech but on disinformation, and maybe ethics could be part of that as well, in addition to some of the more embedded training you were referring to. 

But I think these are issues that I think we're not really talking about and that we should be talking about. 

>> PANELIST: And Sam Woolley had another comment that he wanted to wrap up with here. 

>> SAMUEL WOOLLEY: I think that looking ahead to future scenarios is a really great idea.  I'm glad that someone's brought that up.  I think that we have a lot of elections coming up in the following months.  And I really do think that people working in the countries where elections are happening need to coordinate both with their own governments but also with the platforms to figure out what the realtime solutions to this problem are going to be. 

I also think, on Alina's comment about the platforms themselves, we need to remember that the Trump campaign worked very closely with Twitter, Facebook and Google.  Their staff were actually embedded in San Antonio with the Trump digital operation, helping them make decisions about what was going to happen on the campaign, and specifically about the advertising the campaign was doing.  And so political communication on social media is now a big market for these companies.  We need to figure out what they're allowing to happen and what they're not allowing to happen as far as actual campaigns go. 

>> PANELIST: Okay, great.  So regrettably, we are just about out of time here.  But any last remarks from any of these folks here? 

>> DONATIEN NIYONGENDAKO: The people who are active on social media spreading fake news are sometimes paid by governments, and they want to keep their president in government.  What is important for human rights defenders in stopping fake news is to collaborate on social media to counter or stop disinformation on WhatsApp or Facebook.  Also, fake news (?) is used to arrest people ‑‑ activists or journalists ‑‑ during repression.  So human rights (?) together against fake news and defend human rights. 

>> PANELIST: So very briefly: I think that, you know, this is a challenge to democracy, and I firmly believe that democracy needs to evolve to deal with this problem.  But it also needs a democratic solution.  This is not something government is going to come in and solve.  The platforms aren't going to get together and solve this on their own.  It's going to involve solutions from a lot of different actors, and I think that's the right approach.  We need a democratic response to this.  We need to maintain our values.  We need to maintain our respect and our support for freedom of expression.  That's the way we're going to get through this.  We can't recognize a challenge to democracy and then say, okay, well, we need less democracy, then.  We need more democracy. 

>> PANELIST: Great.  Well, thank you all very much for your time today.  To echo Sam's comments, this is something that we all need to continue working on together.  At NDI, we're working on information integrity issues.  Anyone wrestling with these issues, we'd be interested in working with you and talking with you in the future.  Thank you for your time.  Thank you to Sarah Moulton for helping with the event and for hosting it.  So thank you very much. 

[ Applause ]

(The session ended at 3:50 p.m.)