The Internet's bigot crisis: There's a new push to curtail online bigotry, but the toxic sludge of hate is too enormous to erase

It's impossible to combat online bigotry, which is spread through code words and implications more than slurs

By Amanda Marcotte

Senior Writer

Published June 1, 2016 11:59AM (EDT)

(eyetoeyePIX via iStock)

For roughly the thousandth time, the masters of social media in Silicon Valley are promising to do something about online hate speech. Bloomberg reports that an impressive-sounding group of tech giants — Facebook, Twitter, Google and Microsoft — have "pledged to tackle online hate speech in less than 24 hours as part of a joint commitment with the European Union to combat the use of social media by terrorists."

This announcement comes with more pomp (the EU! combating terrorism!) than previous vows to start taking internet abuse seriously, but it's wise not to get too excited about our supposed future internet featuring 50% less racist vitriol. Users of these services have been subject to innumerable promises to do better, and invariably people find that the internet continues to be the toxic waste dump that it was before the promised clean-up. Skepticism isn't just warranted, but mandatory at this point.

Not that these companies haven't tried to do better. Twitter, for instance, partnered with Women, Action and Media! in 2014 to combat online misogyny, and earlier this year it unveiled an even bigger coalition of about 40 groups to help beat back online abuse. Facebook, too, has engaged in dialogue with groups fighting hate speech on its site and announced initiatives to combat extremist rhetoric that stokes bigotry and violence.

And there have definitely been some marked improvements. It's harder to run a hate group on Facebook than it used to be, since it will often get reported and taken down. It used to be unheard of for Twitter to ban users, but now you're seeing some of the most mean-spirited harassers losing their Twitter handles.

But these moves are doing very little to stem the overwhelming tide of bilious hatred pouring out on social media. As the Bloomberg story reports, a French Jewish youth group, UEJF, decided to monitor how well Twitter, Facebook, and Google were doing in response to complaints about hate speech on their social media platforms. What they found was disheartening.

"In the course of about six weeks in April and May, members of French anti-discrimination groups flagged unambiguous hate speech that they said promoted racism, homophobia or anti-Semitism," the Bloomberg piece explains. "More than 90 percent of the posts pointed out to Twitter and YouTube remained online within 15 days on average following requests for removal, according to the study by UEJF, SOS Racisme and SOS Homophobie."

Dealing with bigotry online is very much like trying to bail out a leaking boat. Ban someone on Twitter for harassing people, and odds are that they will simply start another account under a new name and start right back up again.

There's also a volume problem. Social media giants like Facebook already employ an army of overworked and underpaid moderators who toil endlessly to remove porn, violent images and other extreme content from their sites. Often these moderators work abroad, in countries like the Philippines. Just keeping people from seeing porn or photos of gruesome accidents is an unbelievable amount of work. Removing all the racism and sexism, which are in abundance and often context-dependent, would expand that workload exponentially.

The impulse, then, is to look to a tech solution for the problem. But while one can create bots that trawl for certain words or images and flag content for moderators to remove, the bigots quickly figure out how the bots work and invent new words and images to convey the same concepts.
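To see why this cat-and-mouse game favors the bigots, here is a bare-bones sketch in Python of how keyword-based flagging works. It is hypothetical and stands in for no platform's actual moderation code; the blocklist entries and function name are made-up placeholders. The bot can only catch terms someone has already put on its list, so a freshly coined slur is invisible to it until a human notices the pattern and updates the list.

```python
# Minimal, hypothetical sketch of keyword-based content flagging.
# The blocklist only contains terms moderators already know about.
BLOCKLIST = {"known_slur_1", "known_slur_2"}  # placeholder terms

def flag_for_moderation(post: str) -> bool:
    """Return True if the post contains any term already on the blocklist."""
    words = post.lower().split()
    return any(word.strip(".,!?") in BLOCKLIST for word in words)

# A post using an established slur gets flagged...
print(flag_for_moderation("Some post containing known_slur_1."))        # True
# ...but the same idea hidden behind a newly invented code word sails past.
print(flag_for_moderation("Same idea hidden behind brand_new_code_word"))  # False
```

The filter is only as good as its list, which is exactly the weakness the next paragraph describes: new coinages work precisely because nobody has listed them yet.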

This is one reason, no doubt, that the racial slur "dindu" and images of the Pepe the Frog figure have become so popular in white supremacist circles online. These words and images are new and therefore not well known outside of white supremacist circles, making it that much harder for moderation software and even human beings to interpret them correctly as hate speech and flag them for removal. Should the public start recognizing these signposts as well as they do the N-word or more famous racist imagery, white supremacists will just come up with some new racial slurs and images to communicate their ideas while avoiding moderators.

But even if the tools for dealing with online hate speech become sharper and swifter, the sad fact of the matter is that they can only cut away a small fraction of the bigotry that social media allows to pour out of people's keyboards. The truth is that, as in the world outside of social media, most bigots have learned to communicate their hateful beliefs in terms that are just ambiguous enough that they can argue, when called out, that they aren't really racist/sexist/homophobic.

Banning overt white supremacists who fling racial slurs around would be good, of course, but odds are high that Twitter isn't going to take away, let's say, Donald Trump's account. The man is clearly a racist, but he knows well enough to push his racist notions through implication or to layer them with plausible deniability, rather than resorting to crude racial slurs that justify the immediate banning of his account.

This matters, because someone like Trump is a much stronger conduit for racist sentiment than a bunch of skinheads yelling racial slurs on Twitter. And he's really on the more overt end of things. Most bigots prefer to communicate through a wink and a nudge rather than shouting the N-word off the rooftops or calling women the C-word to their faces.

Take, for instance, the horror story about a gorilla at the Cincinnati Zoo who was shot after he got hold of a little boy who had slipped away from his mother and snuck into the cage. Even though anyone who has ever spent five minutes around a child can tell you what slippery little bastards they can be, thousands online rose up to denounce the mother as a bad parent and demand she be put in prison for her supposed neglect. Fox & Friends even did a segment on it, making a fuss over the boy's father's criminal record, though the father wasn't there at the time.

This story is sad, but it's also a reminder of how most online racism works and why it's such a tricky beast to manage. The parents in this case are black, and the whole online (and now mainstream media) outrage is a fairly obvious case of a bunch of mostly white people exploiting this horrible tragedy to push racist notions about the unfitness of black parents. ("At times, the barrage of insults were racially charged," as the Washington Post delicately puts it.) But since the racism is disguised not as hate but as noble intentions — the love of animals, parental self-righteousness — good luck to any social media platform that wants to flag and remove all the bad faith caterwauling on the grounds that it's hate speech.

That, at the end of the day, is the biggest problem of all: Even if you could scrub the internet of all incidents of bigoted slurs, that would still only be a tiny smidgen of the bigotry being pushed online. Most bigotry comes disguised as something else, precisely to avoid the kind of social opprobrium that comes along with overt bigotry. But that doesn't make it any less damaging than the overt stuff. It might be even worse, since the subtle stuff is more widespread and far more insidious.


By Amanda Marcotte

Amanda Marcotte is a senior politics writer at Salon and the author of "Troll Nation: How The Right Became Trump-Worshipping Monsters Set On Rat-F*cking Liberals, America, and Truth Itself." Follow her on Twitter @AmandaMarcotte and sign up for her biweekly politics newsletter, Standing Room Only.
