Can taking down websites really stop terrorists and hate groups?

Having an online presence helps hate groups stay active, but efforts to deny them one are unlikely to succeed

Published September 19, 2017 6:00AM (EDT)

A Ku Klux Klan demonstration at the state house building on July 18, 2015 in Columbia, South Carolina. (Getty/John Moore)

This piece originally appeared on The Conversation.

In the wake of an explosion in London on September 15, President Trump called for cutting off extremists’ access to the internet.

Racists, terrorists and many other extremists have used the internet for decades, adapting as technology evolved: shifting from text-only discussion forums to elaborate and interactive websites, custom-built secure messaging systems and even entire social media platforms.

Our research has examined various online communities populated by radical and extremist groups. And two of us were on the team that created the U.S. Extremist Crime Database, an open-source database helping scholars better understand the criminal behaviors of jihadi, far-right and far-left extremists. Analysis of that data demonstrates that having an online presence appears to help hate groups stay active over time. (One of the oldest far-right group forums, Stormfront, has been online in some form since the early 1990s.)

But recent efforts to deny these groups online platforms will not drive hate groups, or hate speech, off the web. In fact, some scholars theorize that attempts to shut down hate speech online may cause a backlash, worsening the problem and making hate groups more attractive to marginalized and stigmatized people, groups and movements.

Fighting an impossible battle

Like ordinary individuals and corporations, extremist groups use social media and the internet, but there have been few concerted efforts to eliminate their presence from online spaces. For years, Cloudflare, a company that provides technical services and protection against online attacks, has been a key provider for far-right groups and jihadists, withstanding harsh criticism.

The company refused to act until a few days after the violence in Charlottesville. As outrage built around the events and groups involved, pressure mounted on companies providing internet services to the Daily Stormer, a major hate site whose members helped organize the demonstrations that turned fatal. As other service providers stopped working with the site, Cloudflare CEO Matthew Prince emailed his staff that he “woke up … in a bad mood and decided to kick them off the internet.”

It may seem like a good first step to limit hate groups’ online activity — thereby keeping potential supporters from learning about them and deciding to participate. And a company’s decision may demonstrate to other customers its willingness to take hard stances against hate speech.

But that decision can cause problems: Prince criticized his own role, saying, “No one should have that power” to decide who should and shouldn’t be able to be online. And he made clear that the move was not a signal of a new company policy.

Further, as a purely practical matter, the distributed, global nature of the internet means no group can be kept offline entirely. All manner of extremist groups have online operations, and despite efforts by mainstream sites like Facebook and Twitter, they are still able to recruit people to far-right groups and the jihadist movement. Even the Daily Stormer itself has managed to remain online after being booted from the mainstream internet, finding new life as a site on the dark web.

Drawing attention

Efforts to knock extremists offline may also have counterproductive results, helping the targeted groups recruit and radicalize new members. The fact that their websites have been taken down can become a badge of honor for those who are blocked or removed. For instance, Twitter users affiliated with IS who were blocked or banned at one point are often able to reactivate their accounts and use their experience as a demonstration of their commitment.

When a particular site is under fire, people who hold similar beliefs may be drawn to support the group, motivated by a perceived opportunity to express views that are opposed by socially powerful companies or organizations. In fact, radicalization scholars have found that some extremist groups actively seek out harsh penalties from criminal justice agencies and governments in an effort to exploit perceived overreactions for a public relations advantage that also aids their recruitment efforts.

Relations between tech companies and police

Internet companies’ decisions about online expression also affect the difficult relationship between the technology industry and law enforcement. There are many examples of cooperation between web hosting providers and police investigating child pornography or other crimes. But policies and practices vary widely and can depend on the circumstances of the crime or the nature of the police request.

For example, Apple refused to help the FBI retrieve information from an iPhone used by a man who shot 14 people in San Bernardino, California, in 2015. The company said it wanted to avoid setting a precedent that could put its customers at risk of intrusive or unfair investigations in the future. And Apple has since substantially increased its protections for data stored on its devices.

All of this suggests the tech industry, law enforcement and policymakers must develop a more measured and coordinated approach to removing extremist and terrorist content online. Tech companies may intend to create a safer and more inclusive environment for users, but they may actually encourage radicalization while setting precedents for removing content in the face of public outcry, regardless of legal or moral obligations.

To date, these concerns have arisen suddenly and briefly only in the wake of specific events, like 9/11 or Charlottesville. And while opponents may shut down one or more hate sites, those sites will likely pop back up elsewhere, perhaps even stronger. The only way to really eliminate this kind of online content is to decrease the number of people who support it.

Thomas Holt, Associate Professor of Criminal Justice, Michigan State University; Joshua D. Freilich, Professor of Criminal Justice, City University of New York, and Steven Chermak, Professor of Criminal Justice, Michigan State University

 



