Researchers have found a simple way to reduce online hate speech

The ability of social media to curb hate speech is debated — but a new study finds it's not that hard

By Matthew Rozsa

Staff Writer

Published November 22, 2021 7:22PM (EST)

Logo of US social network Twitter on a smartphone screen. (Photo by Kirill Kudryavtsev/AFP via Getty Images)

Online hate speech has been a problem since the earliest days of the internet, but in recent years it has metastasized. In the past five years, social media has fueled far-right extremism and, more recently, helped galvanize Trump supporters to attempt a violent coup after he lost the 2020 election, based on misinformation largely spread online.

The January 6th riots prompted Twitter and other social media platforms to suspend Trump's accounts, as the companies faced inquiries over how they planned to balance protecting people from harm and propaganda with maintaining a culture that promotes free expression.

Given the limitations of digital platforms, it is reasonable to be skeptical of such efforts. After all, social media companies profit from our communications regardless of their nature, and they have limited leverage in that they generally do not produce their own media but rather collate and curate the words of others.

Yet a new study by New York University researchers found that a relatively simple move on the part of a social media site could have a substantial impact on the spread of hate speech. Their study involved sending alert messages to Twitter users who had been posting tweets that constituted hate speech.

Published in the scholarly journal Perspectives on Politics, the study explored whether alerting users that they were at risk of being held accountable could reduce the spread of hate speech. The researchers based their definition of "hateful language" on a dictionary of racial and sexual slurs. Then, after identifying 4,300 Twitter users who followed accounts that had been suspended for posting language defined as hateful, the researchers sent warning tweets from their own accounts which, though phrased in slightly varying ways, let each user know that "the user [@account] you follow was suspended, and I suspect that this was because of hateful language." A separate control group received no messages at all.
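To make the design concrete, here is a minimal sketch in Python of how a dictionary-based detection and warning step could work. This is not the authors' code; the slur list, the sample tweets, and the account handle are hypothetical placeholders, and the warning text simply follows the wording quoted above.

    # Minimal sketch of a dictionary-based warning pipeline (hypothetical, not the study's code).
    import re

    SLUR_DICTIONARY = {"slur_a", "slur_b"}  # stand-in for a real lexicon of racial and sexual slurs

    def contains_hateful_language(tweet_text: str) -> bool:
        """Flag a tweet if any word matches the slur dictionary."""
        words = re.findall(r"[a-z']+", tweet_text.lower())
        return any(word in SLUR_DICTIONARY for word in words)

    def build_warning(suspended_handle: str, polite: bool) -> str:
        """Compose a warning tweet; the study varied the wording across treatment conditions."""
        core = (f"The user @{suspended_handle} you follow was suspended, "
                "and I suspect that this was because of hateful language.")
        return f"Hi there. {core} Please consider this a friendly heads-up." if polite else core

    # Screen a user's recent tweets and print a warning if any are flagged.
    recent_tweets = ["just a normal tweet", "something with slur_a in it"]  # placeholder data
    if any(contains_hateful_language(t) for t in recent_tweets):
        print(build_warning("suspended_account", polite=True))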

The study was conducted during a week in July 2020, amid Black Lives Matter protests and the COVID-19 pandemic, a period when a significant amount of hate speech was being directed against Black and Asian Americans online.

The result? Twitter users who received a warning reduced the number of tweets containing hateful language that they posted by up to 10 percent over the following week; when the warning messages were worded politely, the reduction reached 15 to 20 percent. The study also found that people were more likely to reduce their use of hateful language if the account issuing the warning conveyed a sense of authority. Since there was no significant reduction within the control group, this suggests that people will modify their bad behavior if they are told they may be held accountable, and that they are more likely to view a warning as legitimate if it comes from someone who is credible and polite.
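As a back-of-the-envelope illustration of what those percentages mean (the counts below are invented, and the paper's actual analysis uses a more careful statistical design), the week-over-week change for each group can be computed like this:

    # Schematic effect-size arithmetic with invented counts, not study data.
    hateful_before = {"treated": 100, "control": 100}  # hateful tweets in the week before warnings
    hateful_after = {"treated": 90, "control": 100}    # week after: treated drop ~10%, control flat

    for group in ("treated", "control"):
        change = (hateful_after[group] - hateful_before[group]) / hateful_before[group]
        print(f"{group}: {change:+.0%} change in hateful tweets")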


The researchers added that these numbers are likely underestimates. The accounts they used had at least 100 followers, which lent them only a limited amount of credibility. Future experiments could test how the effects change if accounts with more followers, or Twitter employees themselves, get involved.

"We suspect as well that these are conservative estimates, in the sense that increasing the number of followers that our account had could lead to even higher effects," the authors wrote, citing other studies.

Unfortunately, one month after the warnings were issued, they had lost their impact. The tweeters went right back to posting hate speech at rates similar to those before the experiment started.

"Part of the motivation for this paper that led to the development of the research design was trying to think about whether there are options besides simply banning people or kicking them off of platforms," New York University Professor Joshua A. Tucker, who co-authored the paper, told Salon. "There are a lot of concerns that if you kick people off of platforms for periods of time, they may go elsewhere. They may continue to use hateful language and other content, or they may come back and be even more angry about it. I think in some sense, this was motivated by the idea of thinking about the range of options that are here to reduce the overall level of hate toward each other on these platforms."

Though social media sites operate as though they are free spaces for expression, curation of content on private platforms is not prohibited by the First Amendment. Indeed, the Constitution prohibits only the government from punishing people for the ways in which they express themselves. Private companies have a right to enforce speech codes on both their employees and customers.



Matthew Rozsa is a staff writer at Salon. He received a master's degree in history from Rutgers-Newark in 2012 and was awarded a science journalism fellowship from the Metcalf Institute in 2022.


