How to spot a white supremacist on Twitter

Monitoring hate tweets can be tricky, but an algorithm developed by researchers in London could change all that

Published April 3, 2013 4:58PM (EDT)

(VLADGRIN via Shutterstock/Salon/Benjamin Wheelock)

This article originally appeared on The Daily Dot.

For as long as the Internet has been commonplace, law enforcement agents have found it helpful in catching criminals (and not just the stupid ones who post video evidence of their crimes on YouTube, either). Extremist groups—such as the various American white-supremacist organizations—also find the Internet useful for connecting with like-minded people.

But it’s intensely time-consuming for police to personally wade through the web’s ever-growing stream of white supremacist tweets and other social media posts in search of the relative handful that indicate a possible threat.

Maybe there’s an easier way. Two researchers, J. M. Berger and Bill Strathearn, from the International Centre for the Study of Radicalization and Political Violence (ICSR) in London, have developed an algorithm with a high rate of success in identifying extremists on Twitter, by analyzing the relationships between Twitter account holders (as opposed to analyzing the actual posted content).

In a 56-page study (released in PDF form), Berger and Strathearn said:

“It is relatively easy to identify tens of thousands of social media users who have an interest in violent ideologies, but very difficult to figure out which users are worth watching. For students of extremist movements and those working to counter violent extremism online, deciphering the signal amid the noise can prove incredibly daunting.”

In other words, not every online follower of an extremist holds extreme views, and even among those who do, not everybody will act violently on them. Consider everybody who might follow a white nationalist Twitter feed, for example. Some will be cops, journalists and other non-racists who follow in order to research the bigots, argue with them or simply troll them. Some followers are self-proclaimed white nationalists who limit their bigotry to legal (though offensive) speech; they might say hurtful things, but won’t commit violent acts or otherwise endanger innocent people.

But there’s a third group: those who don’t merely profess extremist beliefs but are willing to act violently upon them. If you’re in law enforcement, hoping to identify members of the third group before they hurt anybody, how can you do this?

Sure, you could read every one of those tweets to determine who’s who, except that doing so is far too time-consuming and labor-intensive to be remotely feasible. Berger and Strathearn’s algorithm can make those determinations mathematically.

"Our starting data centered on followers of 12 American white nationalist/white supremacist “seed” accounts on Twitter. We discovered that by limiting our analysis to interactions with this set, the metrics also identified the users who were highly engaged with extremist ideology."

Those 12 accounts had over 3,500 followers between them (who collectively generated over 340,000 tweets), yet fewer than half of those followers publicly self-identified as white supremacists or white nationalists. Now suppose you’re stuck with the task of sifting through those thousands of followers and hundreds of thousands of tweets to find which ones might be dangerous enough to warrant a closer look.

According to Berger and Strathearn:

"By measuring interactions alone—without analyzing user content related to the ideology—we narrowed the starting set down to 100 top-scoring accounts, of which 95 percent overtly self-identified as white nationalist. […] A comparison analysis run on followers of anarchist Twitter accounts suggests the methodology can be used without modification on any number of ideologies."

The researchers identified three key terms used in their algorithm, which they listed and defined as follows:

  • Influence: A metric measuring a Twitter user’s ability to publicly create and distribute content that is consumed, affirmed and redistributed by other users.

  • Exposure: A metric measuring a Twitter user’s tendency to publicly consume, affirm and redistribute content created by other users.

  • Interactivity: A metric measuring a Twitter user’s combined influence and exposure based on their public activity.

For example: suppose you’re a non-racist person following ex-Klansman David Duke on Twitter, and occasionally sending him a tweet disagreeing with his views. It’s highly unlikely any of Duke’s racist followers will find your comment worth re-tweeting. But a racist Duke follower who sends tweets reinforcing his white power views probably will inspire lots of retweets and conversations in the more bigoted regions of the Twitterverse.
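The article reproduces those definitions but not the underlying formulas, so the following is only a guess at how the three metrics might be computed: it splits the single interaction count from the sketch above into its two directions, using the same hypothetical data layout.

```python
from collections import defaultdict

def engagement_metrics(interactions, followers):
    """Rough sketch of the report's three metrics, assuming each is a plain
    count of public retweets/mentions within the follower set."""
    influence = defaultdict(int)   # others pick up this user's content
    exposure = defaultdict(int)    # this user picks up others' content
    for rec in interactions:
        src, tgt = rec["source"], rec["target"]
        if src in followers and tgt in followers:
            exposure[src] += 1    # src consumed/affirmed/redistributed tgt's content
            influence[tgt] += 1   # tgt's content was consumed/affirmed/redistributed
    # Interactivity combines the two, per the definition above.
    interactivity = {u: influence[u] + exposure[u] for u in followers}
    return influence, exposure, interactivity
```

In the David Duke example, the non-racist critic would score low on influence (nobody in the network retweets the disagreement), while the white power follower would score high on both influence and exposure, and therefore on interactivity.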

Thus the algorithm focuses on the connections while paying no attention to the content. In the course of the study, however, the researchers did discover certain trends in tweeted content; the study also lists which websites self-identified white nationalists were most likely to link to (and which white nationalist sites most often used Twitter as a self-promotion vehicle):

"The most-linked extremist site was WhiteResister.com, but more than half the links to that site originated with two Twitter accounts, both affiliated with the site. Discounting links from those two accounts, WhiteResister.com would have dropped to sixth."

The study briefly lists and discusses the most common hashtags used by extremists, before detailing the algorithm’s possible real-world applications in Countering Violent Extremism (CVE), and the importance of separating actual threats from mere Internet Tough Guys:

"In short, the vast majority of people taking part in extremist talk online are unimportant. They are casually involved, dabbling in extremism, and their rhetoric has a relatively minimal relationship to the spread of pernicious ideologies and their eventual metastasization into real-world violence. Any CVE program must begin by sifting the wheat from this chaff.


By Jennifer Abel
