BOOK EXCERPT

The "novelty hypothesis" explains how — and why — people fall for fake news bots

"Social bots" are spreading fake news — and humans are falling for them. Here's how

Published September 27, 2020 2:00PM (EDT)

(Getty Images/Random House)

Adapted from "THE HYPE MACHINE: How Social Media Disrupts Our Elections, Our Economy, and Our Health—and How We Must Adapt" by Sinan Aral. Copyright © 2020 by Sinan Aral. Published by Currency, an imprint of Random House, a division of Penguin Random House LLC. All Rights Reserved.

Social bots, meaning software-controlled social media profiles, are a big part of how fake news spreads online, and the way they are used to spread lies is both disturbing and fascinating. But bots are only part of the story. In reality, fake news spreads through a symbiosis between bots and humans. If we only focus on bots, we'll miss the bigger picture and our own role in the spread of misinformation. Understanding our contributions to this symbiotic process is essential to fighting fake news.

In 2018 my friend and colleague Filippo Menczer at Indiana University, along with Chengcheng Shao, Giovanni Ciampaglia, Onur Varol, Kai-Cheng Yang, and Alessandro Flammini, published the largest-ever study on how social bots spread fake news. They analyzed 14 million tweets spreading 400,000 articles on Twitter in 2016 and 2017. Their work corroborated the finding, from our ten-year study of misinformation on Twitter, that fake news was more viral than real news. They also found that bots played a big role in spreading content from low-credibility sources. But the way bots worked to amplify fake news was surprising, and it highlights the sophistication with which they are programmed to prey on the Hype Machine, the ever-expanding social media ecosystem that has blanketed the planet over the last ten years.

First, bots pounce on fake news in the first few seconds after it's published, and they retweet it broadly. That's how they're designed. And the initial spreaders of a fake news article are much more likely to be bots than humans.

What happens next validates the effectiveness of this strategy: humans do most of the subsequent retweeting. The early tweeting activity by bots triggers a disproportionate amount of human engagement, creating cascades of fake news that are ignited by bots but propagated by humans through the Hype Machine's network.

Second, bots mention influential humans incessantly. If they can get an influential human to retweet fake news, it simultaneously amplifies and legitimizes it. Menczer and his colleagues point to an example in their data in which a single bot mentioned @realDonaldTrump (the president's Twitter handle) nineteen times, linking to the false news claim that millions of votes were cast by illegal immigrants in the 2016 presidential election. The strategy works when influential people are fooled into sharing the content. Donald Trump, for example, has on a number of occasions shared content from known bots, legitimizing their content and spreading their misinformation widely in the Twitter network. It was Trump who adopted the false claim that millions of illegal immigrants voted in the 2016 presidential election as an official talking point.

But bots can't spread fake news without people. In our ten-year study with Twitter, we found that it was humans, more than bots, who made false rumors spread faster and more broadly than the truth. In their study of 2016 and 2017, Menczer and his colleagues also found that humans, not bots, were the most critical spreaders of fake news in the Twitter network. In the end, humans and machines play symbiotic roles in the spread of falsity: bots manipulate humans into sharing fake news, and humans spread it onward through the Hype Machine.
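
To make this division of labor concrete, here is a minimal sketch, not drawn from either study's code, of how one might measure it; the data model and the bot detector are assumptions for illustration:

```python
# Minimal sketch (assumed data model, not the studies' code): compare
# bot prevalence among the earliest sharers of an article with bot
# prevalence among later sharers. bot_score would come from a bot
# detector such as a Botometer-style classifier.
from dataclasses import dataclass


@dataclass
class Share:
    timestamp: float  # seconds since the article was published
    bot_score: float  # detector's estimate, in [0, 1], that the account is a bot


def early_vs_late_bot_share(shares, n_early=10, threshold=0.5):
    """Fraction of likely bots among the first n_early sharers vs. the rest."""
    ordered = sorted(shares, key=lambda s: s.timestamp)
    early, late = ordered[:n_early], ordered[n_early:]

    def frac(group):
        return sum(s.bot_score >= threshold for s in group) / len(group) if group else 0.0

    return frac(early), frac(late)


# Toy cascade: bots pounce in the first seconds, humans take over later.
shares = [Share(2, 0.9), Share(5, 0.8), Share(9, 0.7),
          Share(300, 0.1), Share(900, 0.2), Share(3600, 0.1)]
print(early_vs_late_bot_share(shares, n_early=3))  # -> (1.0, 0.0)
```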

Misleading humans is the ultimate goal of any misinformation campaign. It's humans who vote, protest, boycott products, and decide whether to vaccinate their kids. These deeply human decisions are the very object of fake news manipulation. Bots are just a vehicle to achieve an end. But if humans are the objects of fake news campaigns, and if they are so critical to their spread, why are we so attracted to fake news? And why do we share it? 

One explanation is what Soroush Vosoughi, Deb Roy, and I called the novelty hypothesis. Novelty attracts human attention because it is surprising and emotionally arousing. It updates our understanding of the world. It encourages sharing because it confers social status on the sharer, who is seen as someone who is "in the know" or who has access to "inside information." Knowing that, we tested whether false news was more novel than the truth in the ten years of Twitter data we studied. We also examined whether Twitter users were more likely to retweet information that seemed to be more novel.

To assess novelty, we looked at users who shared true and false rumors and compared the content of rumor tweets to the content of all the tweets those users were exposed to in the sixty days prior to their decision to retweet a rumor. Our findings were consistent across multiple measures of novelty: false news was indeed more novel than the truth, and people were more likely to share novel information. This makes sense in the context of the "attention economy": among competing social media memes, novelty attracts our scarce attention and motivates our consumption and sharing behaviors online.
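
As an illustration of the measurement idea, here is a toy version of a novelty score; our paper used several measures, including topic-model distances, so this TF-IDF stand-in is only a sketch:

```python
# Illustrative novelty score (not the paper's actual pipeline, which used
# several measures): how lexically different is a rumor tweet from
# everything the user was exposed to in the prior sixty days?
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def novelty_score(rumor_tweet, prior_tweets):
    """1 minus the max cosine similarity between the rumor and any prior tweet."""
    vectorizer = TfidfVectorizer().fit(prior_tweets + [rumor_tweet])
    rumor_vec = vectorizer.transform([rumor_tweet])
    prior_vecs = vectorizer.transform(prior_tweets)
    return 1.0 - cosine_similarity(rumor_vec, prior_vecs).max()


history = ["the game last night was great",
           "traffic on the bridge again this morning"]
print(novelty_score("millions of illegal votes were cast", history))
# -> near 1.0: the rumor shares almost no content with recent exposure
```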

Although false rumors were more novel than true rumors in our study, users may not have perceived them as such. So to further test our novelty hypothesis, we assessed users' perceptions of true and false rumors by comparing the emotions they expressed in their replies to these rumors. We found that false rumors inspired more surprise and disgust, corroborating the novelty hypothesis, while the truth inspired more sadness, anticipation, joy, and trust. These emotions shed light on what inspires people to share false news beyond its novelty. To understand the mechanisms underlying the spread of fake news, we have to also consider humans' susceptibility to it.
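
The reply analysis can be sketched as simple lexicon counting; the word lists below are invented placeholders, not the emotion lexicon the study actually used:

```python
# Rough sketch of lexicon-based emotion scoring for replies. These tiny
# word lists are placeholders for illustration, not the study's lexicon.
import re
from collections import Counter

EMOTION_LEXICON = {
    "surprise": {"wow", "unbelievable", "shocking", "really"},
    "disgust":  {"gross", "disgusting", "awful", "vile"},
    "trust":    {"reliable", "confirmed", "credible", "true"},
    "joy":      {"great", "happy", "wonderful", "glad"},
}


def emotion_profile(replies):
    """Count emotion-category word hits across all replies."""
    counts = Counter()
    for reply in replies:
        for word in re.findall(r"[a-z]+", reply.lower()):
            for emotion, words in EMOTION_LEXICON.items():
                if word in words:
                    counts[emotion] += 1
    return counts


print(emotion_profile(["Wow, this is shocking", "Gross, just awful"]))
# -> Counter({'surprise': 2, 'disgust': 2})
```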

My friend and colleague David Rand at MIT teamed up with Gordon Pennycook to study what types of people were better able to recognize false news. They measured how cognitively reflective people were, using a standard cognitive reflection test, and then asked them whether they believed a series of true and false news stories. They found that people who were more reflective were better able to tell truth from falsity and to recognize overly partisan coverage of true events. Research also shows that nudges to be reflective and veracity labels can help slow the spread of fake news.
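
One common way to quantify such a result, sketched here with invented data rather than Rand and Pennycook's actual materials, is a discernment score: belief in true headlines minus belief in false ones, correlated with reflection-test performance:

```python
# Sketch with invented data (not Rand and Pennycook's materials): "truth
# discernment" is mean belief in true headlines minus mean belief in
# false ones; the finding predicts it rises with cognitive reflection.
from statistics import correlation, mean  # correlation requires Python 3.10+

# (CRT score 0-3, belief ratings for true headlines, for false headlines)
participants = [
    (0, [0.6, 0.7], [0.6, 0.5]),
    (1, [0.7, 0.8], [0.5, 0.4]),
    (3, [0.9, 0.8], [0.2, 0.1]),
]

crt_scores = [p[0] for p in participants]
discernment = [mean(true) - mean(false) for _, true, false in participants]
print(correlation(crt_scores, discernment))  # positive: reflection helps
```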

These insights can help us stem the spread of fake news online. In addition to using algorithms to identify and disrupt bot networks, we should focus on the human side of the equation. Social media platforms should nudge humans to be more reflective, for example by asking them, from time to time, whether they think a particular headline is true or false. Such prompts have been shown to reduce our belief in false news and our willingness to share it. The platforms can then aggregate the answers to these labeling tasks and combine them with algorithms and with employees tasked with labeling fake news, scaling labeling across the social media ecosystem. The platforms should also demonetize known falsity by banning ads next to false content, demote false content in search results, and limit reshares (as WhatsApp did to reduce the spread of coronavirus misinformation). Meanwhile, public and private education should emphasize media literacy and critical thinking.
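
As a sketch of the aggregation step, with the weighting and threshold assumed purely for illustration rather than taken from any platform's real system, crowd judgments can be blended with a classifier's score to triage items for human review:

```python
# Hypothetical triage sketch: blend crowdsourced veracity judgments with
# a classifier's score to prioritize items for professional fact-checkers.
# The weight and threshold are invented, not any platform's real system.

def blended_falsity_score(crowd_flags, model_score, crowd_weight=0.5):
    """crowd_flags: True = rater judged the item false.
    model_score: a classifier's estimated probability the item is false."""
    crowd_score = sum(crowd_flags) / len(crowd_flags) if crowd_flags else 0.5
    return crowd_weight * crowd_score + (1 - crowd_weight) * model_score


flags = [True, True, False, True]  # 3 of 4 raters flag the headline
score = blended_falsity_score(flags, model_score=0.8)
if score > 0.7:  # assumed review threshold
    print(f"flag for human review (score={score:.2f})")  # score=0.78
```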

To win the war against fake news, we can't rely only on defeating the bots. We have to win the hearts and minds of the humans too.


By Sinan Aral

Sinan Aral is the author of "THE HYPE MACHINE: How Social Media Disrupts Our Elections, Our Economy, and Our Health—and How We Must Adapt." Aral is the David Austin Professor of Management, Marketing, IT, and Data Science at MIT; director of the MIT Initiative on the Digital Economy; and head of MIT's Social Analytics Lab. He is an active entrepreneur and venture capitalist who served as chief scientist at several startups; co-founded Manifest Capital, a VC fund that grows startups into the Hype Machine; and has worked closely with Facebook, Yahoo, Twitter, LinkedIn, Snapchat, WeChat, and The New York Times, among other companies. He currently serves on the advisory boards of the Alan Turing Institute, the British national institute for data science, in London; the Centre for Responsible Media Technology and Innovation in Norway; and C6 Bank, Brazil's first all-digital bank.

 
