(Flora Thevoux/Getty/MatiasEnElMundo)

Why Facebook’s racism problem might never be solved

Recent tales of online abuse directed at marginalized groups epitomize tech's structural inability to address hate


Keith A. Spencer
August 5, 2017 3:00PM (UTC)

Last week, Ijeoma Oluo was in the middle of a road trip with her kids when her phone started to blow up. It was her Twitter notifications, signifying that people were interacting with one of her posts. Normally, this wouldn’t be cause for concern — as a popular writer with a huge online following, Oluo is used to having online conversations and debates. She had recently tweeted out a joke to her followers — nothing especially unusual. But something about this time was different.

“My Twitter app couldn’t keep up and kept crashing,” Oluo told Salon. Initially unbeknownst to Oluo, a benign tweet of hers had been picked up by racist trolls, who sensationalized her words to suit their own delusions, then broadcast her online profiles to the world to spread hate. The trolls came in droves: first to her Twitter, which luckily filtered out many of the worst comments, and then to her Facebook, which didn’t.

“I’m trying to adjust from this place of relaxation and family, and then this is happening on the road,” Oluo said. “And then it felt kind of ridiculous and funny in a way, it was just so extreme. I’ve never gotten that much hate before. And I just couldn’t really be that upset anymore -- I couldn’t even process it.”

Dealing with the absurdity of it all, Oluo started reporting the racist and hateful tweets to moderators. “People call[ed] me a monkey, half-breed, cunt,” Oluo elaborated. As someone who often writes about race, this kind of abuse was boilerplate for Oluo — even if the sheer volume of it wasn’t. “I mean, I know I live in America and people hate me because I’m black,” she said. While trying to enjoy what was supposed to be a pleasant vacation, Oluo was playing Whac-a-Mole with trolls. “They have pretty clear rules, Twitter does — you can report things that violate the terms,” she told Salon. In all the accounts she reported, “Twitter either took their comments down or blocked the account. And with [Twitter’s] quality filter on, I couldn’t even see the comments. And for me that helps."

Ever optimistic, Oluo saw the verbal attacks as a chance to shed some light on the kind of everyday abuse marginalized people experience online. “This happens to people of color, transgender people, women . . . anything that the status quo doesn’t like, this is happening to them all the time,” Oluo said. So she screenshotted a couple of the hateful comments and posted them to her Facebook page as a teachable moment.

A couple hours later, her Facebook account was suspended.

Oh, and what was the tweet that caused the storm of rage? Oluo and her family had stopped at a Cracker Barrel, the chain diner whose kitschy decor and branding romanticize a whitewashed version of American history, and made a joke about it that would’ve drawn laughs in a stand-up set: “At Cracker Barrel 4 the 1st time. Looking at the sea of white folk in cowboy hats & wondering ‘will they let my black ass walk out of here?’”

* * *

Oluo’s nightmare is, sadly, not uncommon. Anyone who writes about politics and race online can be a target of directed online hate; anyone who does so while being black, or a woman, or from another group that hasn’t achieved hegemony, risks exponentially more abuse. There is a disproportionate bravery to being a public figure as active online as Oluo: every tweet and every Facebook post carries the threat of abuse.

And yet, there is something maddening about having your account suspended amid a pile-on against you. It’s like rewarding bullies for bad behavior while punishing the meek. While it’s unclear why exactly Oluo was suspended — Facebook issued a boilerplate statement that merely called it a “mistake” it was trying to atone for — it could be that enough trolls reported her for abuse, a common silencing tactic that weaponizes algorithmic moderation, inverting the way it’s intended to work. This is reminiscent of what happened to Jezebel writer Lindy West in 2013. “That ‘report abuse’ button could easily be used against the people it's intended to protect,” West wrote. “When trolls created a fake Facebook profile for me [in] 2013, [and] I attempted to have it shut down, my genuine account wound up getting reported and suspended in retaliation. . . . The thought of having my Twitter account potentially suspended by abusers in retaliation for fighting back against my own abuse is profoundly enraging.”
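The failure mode is easy to see in miniature. Here is a minimal sketch of a naive report-count moderation rule — every name and threshold below is hypothetical, since no platform publishes its actual enforcement logic — showing how a coordinated brigade trips an automatic suspension against the victim while the actual abusers, each drawing only a handful of reports, skate by:

```python
from collections import Counter

# Hypothetical rule: auto-suspend any account whose report count
# crosses a fixed threshold, with no check on who is reporting or why.
REPORT_THRESHOLD = 50

def process_reports(reports):
    """reports: list of (reporter, reported_account) pairs."""
    counts = Counter(reported for _, reported in reports)
    return {account for account, n in counts.items() if n >= REPORT_THRESHOLD}

# A victim posts a benign tweet; a troll mob files coordinated reports.
brigade = [(f"troll_{i}", "victim") for i in range(60)]
# Meanwhile 20 actual abusers each draw only a couple of organic reports.
organic = [(f"user_{i}", f"abuser_{i % 20}") for i in range(40)]

suspended = process_reports(brigade + organic)
print(suspended)  # only the victim crosses the threshold
```

A system like this could be hardened — weighting reports by reporter history, or routing mass-report spikes to human review — but every such safeguard means hiring more moderators, which is exactly the cost the platforms have been reluctant to pay.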

* * *

From a psychological perspective, as well as a design perspective, social media sites are all but built to generate abuse. Psychologically, being anonymous or pseudonymous online encourages bad behavior — what is known as the “online disinhibition effect.” “The theory is that the moment you shed your identity the usual constraints on your behavior go, too,” wrote Maria Konnikova in a New Yorker essay about the psychology of comments.

From a design perspective, social media sites are engineered on every level to encourage users to comment as often as possible, with no filter, as fast as they can and as much as they can — and to take pleasure in being over-sharers, so as to get the endorphin rush of another “like.” After all, this is how these companies make money: the more time we spend on these sites, the more our eyeballs stare at our screens, the more we are monetizable. As I’ve written before, there’s a contradiction between the social media profit motive — post about everything, constantly! — and the impulse to curb such behavior — don’t post some things! Social media companies are acutely aware of this: Twitter’s design staff, while fussing over its redesign, hemmed and hawed over how to keep the “compose tweet” button as prominent as possible. Upon opening Facebook, the first thing one sees is a beckoning canvas asking you to “share how your day went.” These social media companies are sending us two opposing messages at once.

This perhaps explains why sites like Facebook and Twitter have been happy to harbor racists, Nazis and alt-right trolls, provided they stay in their own little hermetically sealed bubbles and don’t leave. To Facebook or Twitter’s advertisers, a racist’s eyeball is worth as much as anyone else’s. And while the platforms have certainly banned people for bad behavior, it’s apparent there’s a cost-benefit analysis going on here: do just enough to keep the racists at bay to keep your userbase from fleeing en masse, while minimizing having to hire more humans to deal with it.

It’s worth considering what an anomaly it is, this kind of overeager prodding to divulge that these sites encourage. Nowhere else in life or business are we encouraged to act or speak without thinking or editing ourselves first. You’ll be hard-pressed to find a company that encourages its employees never to re-read a passage they’ve written, or an email they’re about to send. We spend many years of compulsory education learning to edit ourselves, learning social rules, absorbing the decorum that governs grammar and speech, and having others grade and edit our work before it’s made public. Then we load up Twitter or Facebook, and it all goes out the window.

There’s another reason that Silicon Valley has evolved to breed online hate and bile, though, and it’s more political. The internet itself had its underpinnings in a socially libertarian fantasy. Prelapsarian hackers and their hippie computer-enthusiast comrades believed that the internet could be a kind of freeing platform, where you could be whomever you wanted, freed from the shackles of government control or oppressive social rules.

Over time, as the internet became a big business, the CEOs of Silicon Valley companies pretty much all became economic libertarians too, befitting their privilege and position in society. (Those at the top are mostly white and mostly men, if you were wondering.) The business plans of many of the big Silicon Valley companies reflect a laissez-faire, self-regulating, individualistic view of society — predictably stripped of concern for how some might abuse others, or how structural oppression might play a role in one’s business model.

You see this pattern constantly: startups begin with a laissez-faire platform that gets big fast and comes to reproduce the same inequalities we see in society at large. To wit: Recall how Airbnb was caught off guard when a study revealed systemic racial discrimination by its hosts. Or how about the time Uber noticed that its drivers were discriminating against black passengers and women? Or the discovery that iPods sold on eBay, photographed being held by a black hand, went for 21 percent less than those held by a white hand? Or how the anarchic online forum 4chan, with little in the way of control or moderation, inevitably became dominated by fascists and the alt-right? Or the hundreds of thousands of times that Facebook and Twitter failed to protect pretty much any marginalized group from abuse?

Again and again this pattern plays itself out. Facebook and Twitter are a libertarian wet dream: self-regulating hives that encourage an unfettered, unfiltered individualism, and commodify every aspect of our identity into a monetizable data point. And again and again, they fail a huge number of users, generating abuse, hate, sexism and racism — in part because the politics of their leaders are baked into the companies they build. Silicon Valley epitomizes the structural shortcomings of libertarianism: the same inequalities we see every day are reproduced en masse when they land in a digital universe stripped of any regulation, governed by corporations who want only to profit off us.

Keith A. Spencer

Keith A. Spencer is a senior editor for Salon. He manages Salon's science, tech, economy and health coverage. His book, "A People's History of Silicon Valley: How the Tech Industry Exploits Workers, Erodes Privacy and Undermines Democracy," was released in 2018. Follow him on Twitter at @keithspencer, or on Facebook here.
