Discerning Truth in Today's Information Ecosystem

Renee DiResta of the Stanford Internet Observatory, a 2017 Presidential Leadership Scholar, reports on how disinformation campaigns work and how consumers can fight back.

Renee DiResta serves as the Technical Research Manager for the Stanford Internet Observatory. In that role, the 2017 Presidential Leadership Scholar studies state-sponsored disinformation campaigns and works to track down false narratives and information. Her investigative work has led her to testify before Congress and speak widely about the abuse of information technologies, and she is a contributor to Wired.

A graduate of the State University of New York-Stony Brook, DiResta spoke in late September with Lindsay Lloyd, the Bradford M. Freeman Director of the Human Freedom Initiative at the George W. Bush Institute, Chris Walsh, Senior Program Manager at the Bush Institute, and William McKenzie, Senior Editorial Advisor at the Bush Institute. She explained how disinformation campaigns actually work and where they originate, how social media companies are responding, and how the latest trend is foreign actors amplifying false information created by Americans. She also discussed tips that consumers of information can use to ferret out disinformation, while suggesting that the explosion in some new technologies may lead us to seek more personal human interaction.

Let’s start with disinformation, specifically from Russia. You have studied the work of actors there, such as the Internet Research Agency. Most Americans probably have no idea how these operations work, so could you explain how they do operate? And what is their intent?

Russia has two different actors running disinformation operations targeting the U.S. that we know of so far. The first is the Internet Research Agency, which is a bit like a social media agency and a bit like a marketing agency. It is staffed by young people. It does not report directly to the Kremlin. And it’s owned by a private individual, so the Russian state has plausible deniability if the agency is caught targeting another country.

You may have seen the memes of Hillary (Clinton) fighting Jesus, or inflammatory content that was pushed out to people who are members of particular racial, regional, or interest groups. The Internet Research Agency works to segment American society and inflame divisions between what it sees as those segments. And it does that on social platforms using meme content and troll accounts. They use fake people who communicate the information as if they’re one of your peers.


They also work to infiltrate the communities they target and drive them to take action in the streets, mount protests, or do other things that someone in Russia wants them to do. There is an element of agents of influence to these operations, not just social media manipulation.

The second actor Russia has operating is the GRU, or the GU, which is their military intelligence organization. They’re fairly ineffectual on social media, but they have extraordinarily strong teams that hack various entities.

In the 2016 election, that was the Clinton campaign. We’ve seen them go after international sporting bodies. They’ve gone after government figures in a range of countries. And they then leak that hacked material, often in tranches, so they can selectively steer the way the media in the targeted country is covering the documents.

They use fake personas, pretending to be hacktivists from whatever country they’re targeting, claiming they are locals exposing the wrongdoings of people in power. They also write long-form content, news articles bylined by fake people. If you’re reading your favorite blog and it has inadvertently accepted a contribution from one of these fake journalists, you’re reading Russian propaganda. This is a very old Cold War strategy. They’ve just adapted it to the internet era.

We hear mostly about Russia and a bit about China. But what other countries or governments are trying to get into this game? Would any of them surprise us in terms of being thought of as a U.S. partner or ally?

Not only are governments running these strategies, but so are domestic activist groups across a very wide variety of countries. For example, the youth wing of a political party in Pakistan carried out an operation that the Stanford Internet Observatory examined. It is a free-for-all.

The ones that target the United States are the entities you would expect: Russia, China, Iran. But governments or political parties also target their own people; that’s a much broader set of entities.

One challenge in understanding disinformation campaigns is, to what extent can you attribute the activity to a government? That is very hard to do. We’ve seen operations run out of Saudi Arabia, Egypt, and numerous Middle East countries where they hire social media managers to push out narratives or content that serve the interests of the government. But it’s not a government-run operation like the GRU, which directly reports to the Kremlin. Things seem to be moving in the direction of the plausible deniability offered by contractor farms. That’s a trend we’ve seen worldwide at this point.


How effectively have social media companies responded to this threat? What more could they be doing?

Back in 2017, when the first congressional tech hearings were held to get a sense of how much of this activity was happening on social platforms, there was almost no internal infrastructure that was directly responsible for looking at this. Facebook had a security team, but the security team wasn’t a propaganda-hunting team or a state-sponsored troll-hunting team. That didn’t exist at any platform.

Now there’s much more of an infrastructure internally. Most of the major companies have “integrity” teams whose job is to find this stuff on their platform. They generally do a once-a-month release where they’ll announce usually between two and four takedowns of what they call “coordinated inauthentic behavior.” That is inauthentic behavior in which an actor who isn’t what they seem is trying to push a message or narrative to some targeted group of people.

Fake accounts and front media organizations often will be involved. The social media platforms will investigate and release that information to outside researchers. My team at Stanford partners with the platforms in doing some of those investigations. Sometimes we’ll give a tip to them and they’ll do an investigation. If they in fact find something, we will receive back some of the information and jointly investigate it. We will look for related content on other platforms.

When the platforms do a takedown, they’ll similarly provide the information to outside researchers to write up an independent assessment of what they found. This collaborative process has been going on for maybe six months to a year.

This doesn’t mean all operations are quickly found. Sometimes, operations reach a stage where they have the potential for impact, with hundreds of thousands, even millions, of followers. But now we’re at the point where a lot of the operations are found when they have very few followers. They don’t have much opportunity to have a significant impact. We are doing better at catching things earlier.

We also have seen a rise in domestically generated disinformation or fake news. Are we starting to see an echo chamber where foreign actors are picking up and amplifying those domestic efforts and vice versa?

Yes, that is a remarkable evolution. Even in 2016, early in its activities, the Internet Research Agency recognized that certain inflammatory media properties got a lot of uptake from their audiences. One source they consistently pulled from was Turning Point USA memes. They would slap their own logo over them and push them back out as if their own pages had created them. But this was authentic American content. They had readymade content that had already performed well with the audience the Russians were targeting. So the trolls would just take it and amplify it.

A challenge for hostile actors running an operation is that if they rely on front media properties or fake accounts, things often unravel quickly once one of those is found. But if you just amplify something, you minimize the exposure that comes with creating your own fake accounts. You can instead have accounts on Twitter that look like the voice of the people and retweet the most sensational and polarizing content. They simply amplify it and fly under the radar.

How do we create a greater sense of media literacy so consumers of news in the United States can understand the source of their news?

Very few people search for a fact check if they see something that is emotionally resonant or fits their preconceived view of the world. There are debates among media literacy and techno-sociology scholars about what is reasonable to expect people to do. Do you want people to exist online in a perpetual state of skepticism, where they must fact-check everything?

One element of this is that the platforms need to do a better job of curating, so that more reputable sources are elevated. Or, if a source has repeatedly been found to be false, fact checks should be presented immediately alongside its posts. That way, there is a persistent correction that anyone who sees the content in the future will be aware of.

Then, there’s the other part: helping audiences understand that some information is designed to rile you up. A lot of education needs to happen around the motivations of the actors in the information space and our psychological responses.

You mentioned that we can’t stop all the time and question everything we read. Otherwise, we would probably stop reading. But what two or three types of questions should we be asking ourselves as we read through information that we are not sure about?

One thing is, does this sound too outlandish to be real? I’ve been consistently surprised by how many wild stories go viral. I would first ask, who else is covering this? Did any reputable paper write about this anywhere?

I am probably just a bit left-of-center, so I would see whether people slightly right-of-center were writing about this sensational claim too. What is the counter narrative? If you’re getting your news on social media, you’re seeing things that the platform has identified as something you will be receptive to. The information is filtered and curated based upon the platform’s perception of you.

Without that curation, there’s a concern that people would just be overwhelmed with so much information. So, the tech company feed-ranking algorithms will push up the things that they think you’re more likely to engage with and be entertained by, and go on to share.

Once you’re in that environment, you’re not going to see things that the algorithm thinks you will disagree with or be angered by, or things aimed at a political identity that you don’t have.
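
As a rough illustration of the dynamic described here, below is a minimal sketch of engagement-based feed ranking. It is not any platform’s actual algorithm; the post fields, weights, and scoring function are hypothetical stand-ins for the far richer signals real systems use.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    p_like: float     # hypothetical predicted probability the user likes the post
    p_comment: float  # hypothetical predicted probability the user comments on it
    p_share: float    # hypothetical predicted probability the user shares it

def engagement_score(post: Post) -> float:
    # Hypothetical weights; real ranking systems combine many more signals.
    return 1.0 * post.p_like + 2.0 * post.p_comment + 3.0 * post.p_share

def rank_feed(posts: list[Post]) -> list[Post]:
    # Posts predicted to provoke engagement rise to the top; posts the model
    # thinks you will ignore or be put off by effectively drop out of view.
    return sorted(posts, key=engagement_score, reverse=True)
```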

That’s unfortunate. We’re not all seeing the same content and stories. We’re definitely not seeing the same framing, but oftentimes we’re not even necessarily seeing the same facts. And that is the problem with the information ecosystem as it is today. Americans are having totally different conversations in their bubbles.

In earlier eras, there was a sense that most people were talking about the same thing. The question is, how do we as a democracy find those shared threads given that we’ve been placed into these polarized chambers? 

You recently wrote in the New York Times that we will need a strong, collaborative effort to stop any potential viral spread that claims the outcome of the November election was rigged. What did you mean by that?

My co-author and colleague, Alex Stamos, and I wanted to be clear that the biggest threat wasn’t Russia. Russia is one among many actors working to delegitimize the outcome of the election. Unfortunately, in the United States today, there is already plenty of preemptive delegitimizing of election results by hyper-partisan American voices. There are preemptive narratives about ballots, fraud, and voter suppression.

The net effect is that the public sees this coming from “blue check influencers.” A blue check on a social media platform means the platform has verified your identity. Usually it also means you have a fairly large number of followers, or reach. When people who have the trust of many followers begin to express concern about the legitimacy of the election, or begin to amplify misleading stories, very large percentages of the population may begin to believe that.

This is a real problem, and it destroys democracy. The real concern is less about what Russia or China did or can do, and more about how the confluence of narratives, actors, and voices on social platforms coalesces to erode trust in the outcome of the election.


You also wrote recently in Wired that AI-generated text is the scariest deepfake of them all. How so?

I’ve been able to play recently with this tool called GPT-3. It’s a generative text model put out by OpenAI. It is trained on the content of the open web and generates text. Based on a prompt, such as a word, it will predict the word that is most likely to come next. Everybody’s had this experience if they’ve used an auto-complete. You’re typing your email and you write, “Thank you for…” And then it auto-populates with the thing that you’re most likely to write, or the thing that most people are likely to write.
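
To make the autocomplete analogy concrete, here is a minimal sketch of prompt continuation using GPT-2, a smaller open model from OpenAI, through the Hugging Face transformers library. It is not the GPT-3 tool DiResta describes, and the prompt and settings are illustrative only.

```python
# Next-word prediction in miniature: the model continues a prompt with the
# text it judges most likely to come next.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

completions = generator(
    "Thank you for",
    max_new_tokens=15,        # keep the continuations short
    num_return_sequences=3,   # show a few plausible continuations
    do_sample=True,           # sample instead of always taking the single top word
)
for c in completions:
    print(c["generated_text"])
```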

This generative text produces plausible, human-sounding content, and can do it at a tweet length or at a long-form essay length. And because it sounds like a human wrote it, it would be extremely hard for an ordinary person to detect.

Unlike deepfakes and other types of generative media, text generation is not going to be used in a wild, sensational way. It will be used to blanket the internet with commentary, with comments that appear to come from real people. You can flood a comments section. You can run a lot of Twitter accounts quite effectively and easily. You can use machines to generate the persuasive text that disinformation campaigns and propagandists currently rely on humans to create.

There will be more of a pervasive unreality, as opposed to momentary punctuated scandals that a deepfake video would create. And this model offers an opportunity to reinforce opinions through persistent commentary and chatter on the message boards and social platforms that ordinary people are reading.

This means social media platforms have to get better at detecting inauthentic personas that will be used to push out disinformation. The content itself, the written material, will be harder to find and more pervasive in our information environment.

Let’s end on a positive note. We talk often about the threats that some new technologies like artificial intelligence create, including the spread of deepfakes or the text you just mentioned. But what opportunities might these new technologies present to improve democracies?

That’s a great question. There are a lot of opportunities for creativity, artistry, and culture, but I don’t know that generative text technology is particularly democracy-saving.

Machine-generated text, though, will present a new opportunity for hubs to pop up that prioritize real interactions between real people. People may want to be in an environment where they can guarantee that the person they’re speaking to is who they think they are. This perhaps could return us to more dialogue, more opportunity for people to connect with others, and more of a focus on shared humanity.

As cheesy as it sounds, I am a real person participating in this room or that forum, because I want to engage with other real people. I don’t want to deal with the mass outrage machine or the bots and trolls on social media. I want a place where I can find a community of real people to talk about and engage with real things.

So there is potentially a side-effect of moving toward more authentic community as these technologies come into play. Maybe our ability to produce the synthetic will make people more receptive to — and make them actively seek out — the real.