QAnon content "evaporated" online following post-Jan. 6 social media crackdown, study finds

"Mainstream platforms can have a very big effect on marginalizing or eliminating toxic content," one expert said

Published May 27, 2021 5:40AM (EDT)

A man holds a large "Q" sign while waiting in line to see President Donald J. Trump at a 2018 rally in Pennsylvania. (Getty Images)

Social media policies meant to limit the spread of QAnon conspiracy theories online appear to be working, new research shows.

The study, conducted by the Atlantic Council's Digital Forensic Research Lab, found that QAnon conspiracy-related phrases "evaporated" from both mainstream and alternative social media sites, like Parler and Gab, following high-profile moderation efforts by companies including Facebook, Google and Twitter.

In place of the widespread Q following that grew to enormous size during the Trump presidency (a number of the Jan. 6 rioters who breached the U.S. Capitol were QAnon believers), the movement is now "a cluster of loosely connected conspiracy theory-driven movements that advocate many of the same false claims without the hallmark linguistic stylings that defined QAnon communities during their years of growth," according to the researchers, Jared Holt and Max Rizzuto.

They analyzed more than 40 million mentions of 13 widely known QAnon catchphrases and related language, including "WWG1WGA" ("Where we go one, we go all"), "the storm," "great awakening," "trust the plan," "save the children" and "Pizzagate." Usage began in earnest in March 2020, as the COVID-19 pandemic first barreled through the U.S., and peaked during last summer's racial justice protests, spiking again before Jan. 6 and dropping precipitously in the days after the insurrection, presumably due to the moderation changes at major social media firms.

For example, Twitter told CBS News in March that it had banned as many as 150,000 accounts for promoting QAnon conspiracies since January. Axios reported around the same time that YouTube had removed 30,000 videos promoting similar content, while its parent company, Google, banned ads on its platforms referencing Jan. 6 or even the 2020 election. Facebook implemented a fact-checking program to place warnings on posts containing false or misleading information, though to what extent those warnings were applied to QAnon content remains unclear. Last year, Facebook also announced a wide-ranging initiative to ban Q-related accounts.

Holt and Rizzuto do note some alternative explanations for the drop in QAnon-related content following the Capitol riot: an extended silence from Q, the anonymous poster who inspired the original conspiracy; self-censorship of well-known phrases to evade social media moderation; and the dispiriting impact of Trump's loss on Q followers, who were some of the former president's biggest fans.

Perhaps most surprising were the downstream effects of mainstream moderation on alternative sites with little to no oversight: the researchers concluded that the more right-wing-friendly Parler and Gab did not absorb the QAnon activity displaced when users were banned from mainstream platforms.

"Concerted content moderation works," Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights, told Axios Wednesday about the study. "When they put their minds to it, the mainstream platforms can have a very big effect on marginalizing or eliminating toxic content."


By Brett Bachman

Brett Bachman was the Nights/Weekend Editor at Salon.
