Fake news and online harassment are more than social media byproducts — they're powerful profit drivers

Banning fake-news sites doesn't address the real problem: Social-media companies make big money off lies and hate

Published December 17, 2016 3:00PM (EST)

(AP/Jose Luis Magana/Salon)

Fake news is being tied to everything from the influence of Russian troll farms on the presidential election to an armed man’s invasion of a Washington, D.C., restaurant, the ludicrous but terrifying culmination of an incident known as Pizzagate. Fake news isn’t just dangerous because it distorts public understanding but because, as in the case of Pizzagate or Gamergate before it, it is frequently implicated in targeted online harassment and threats.

Most media commentary about this issue centers on three primary areas: the nature of the “truth,” the responsibilities of social media companies to the public good, and the question of why people believe outrageous and unverified claims. Very little has been said, however, about a critical factor in the spread of fake news and harassment: They are powerful drivers of profit.

Lies, conspiracies, threats and harassment generate a great deal of money for everyone from teenagers in Macedonia to executives in Silicon Valley. Recognition of this fact led Google and Facebook, and later Reddit, to announce in November that they would ban sites identified as “fake news” outlets from using their ad networks, thereby cutting off a profit motive. While this step might cut off funds to a relatively small handful of ever-changing platforms, it does not address the vast bulk of the fake-news economy.

Fake stories and harassment have a point of origin, but the real problem lies elsewhere: in the network effects of user-generated content and the engagement it drives. Engagement, not content, is what generates Internet revenues and profit, so it makes no difference whether the content is "good" or "bad," true or false. Our posting, sharing, commenting, liking and tweeting produce behavioral and demographic data that is then packaged and sold, repackaged and resold. In this economy, one that cuts across platforms, hateful or false representations are as easily converted into analytical, behavioral and ad-sales products as truthful or compassionate ones. Indeed, they are probably more lucrative.

Pizzagate is the perfect example. The delusional fantasy that Hillary Clinton and her campaign manager, John Podesta, were involved in a child sex-trafficking network run by global elites out of a neighborhood restaurant may have had its origins in political smears and propaganda, and was initially shared by “fake news” platforms. But most people’s exposure to this “news” came from user-generated content, tied in turn to revenue generation.

If you search for “Pizzagate” on YouTube right now, you will find more than 180,000 videos. People are uploading new videos by the hour, and hundreds of thousands of people are viewing them. They're also viewing and clicking on the ads that almost always appear in and alongside them.

A banner ad for Toyota, for example, appeared in a video called “PizzaGate is 100% Real, Why is Media Lying?” when I watched it. That video has more than 87,000 views. The channel hosting it had 118,175 subscribers, as of my visit, and had produced more than two dozen Pizzagate-related videos in the previous two weeks.

An ad for Grammarly, the spellchecking and grammar app and extension, was front and center on another one, “YouTube is Deleting My #PizzaGate Videos Without My Permission.” One of the most popular in this collection, with more than 273,000 views, starts with an ad for "Assassin's Creed," 20th Century Fox's upcoming action-adventure film based on a hugely popular video game series.

All a person anywhere in the world has to do to start making money on YouTube is click a button agreeing that Google can sell advertising attached to their videos. Advertisers pay the network after viewers see at least a portion of an ad, which is why ads often appear before the content. Google retains 45 percent of all ad revenue generated by user videos, regardless of what those videos are about.

At roughly $1.50 in revenue per 1,000 views, making a lot of money takes a lot of work and millions of views. While it’s hard to rack up serious money, some people do. YouTube analytics site Statsheep estimates that with more than 49 million subscribers and 13 billion views, the top-rated YouTube channel, PewDiePie, has potential earnings of $34 million. For the average YouTuber, making huge sums of money is extremely unlikely. But that doesn’t matter to YouTube, which has, in the aggregate, more than 4 billion views a day. The company does not release details of its earnings, but analysts estimated that in 2016 YouTube revenues would represent approximately 15 percent of Google’s roughly $77 billion in revenues.
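The arithmetic is easy to sketch. The back-of-the-envelope example below uses the $1.50-per-1,000-views rate and the 45 percent platform share cited above; everything else, including the simplifying assumption that every view is monetized, is illustrative, so these are rough upper bounds rather than actual earnings:

```python
# Back-of-the-envelope YouTube ad-revenue arithmetic, using the figures
# cited above. Real payouts vary and not every view serves an ad, so
# these numbers are illustrative upper bounds, not actual earnings.

REVENUE_PER_1000_VIEWS = 1.50  # dollars, the rough rate cited above
CREATOR_SHARE = 0.55           # Google retains the other 45 percent

def creator_earnings(views: int) -> float:
    """Estimate a creator's cut of ad revenue for a given view count."""
    gross = views / 1000 * REVENUE_PER_1000_VIEWS
    return gross * CREATOR_SHARE

# A single Pizzagate video with 273,000 views nets its uploader only a
# few hundred dollars...
print(f"creator: ${creator_earnings(273_000):,.2f}")  # ~$225

# ...but across 4 billion views a day, the platform's 45 percent share
# compounds into millions of dollars daily.
platform_daily = 4_000_000_000 / 1000 * REVENUE_PER_1000_VIEWS * 0.45
print(f"platform, per day: ${platform_daily:,.0f}")   # ~$2,700,000
```

The individual uploader's cut is trivial; the platform's aggregate cut is not, which is precisely why the content of any one video matters so little.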

YouTube is a relatively simple example because you can see ads directly on videos. But the best way to grow an audience is to share links or memes as profusely as possible across multiple platforms. On Twitter, for example, fake news and sustained episodes of online harassment can create hundreds of millions of tweets that become part of what is called the "firehose," a stream of public data comprising up to 500 million tweets each day.

The #Pizzagate hashtag, for example, has generated hundreds of thousands of tweets and retweets. This engagement data is what Twitter, which earns 85 percent or more of its revenues from advertising, has to sell, whether via targeted promoted tweets, sponsored Moments or data licensing. In theory, episodes like Gamergate or Pizzagate or the harassment of actress Leslie Jones can even be recast in socially palatable ways. “See How Twitter Shut Down Racist Harassment” can be turned into a Moment, for example, and advertisers can embed sponsored tweets in the flow.

Last December, Twitter was charging $1 million for an advertising bundle that included the sale of a Moment of an advertiser’s choice. While the top Twitter moments of 2015, which included #JeSuisParis, #BlackLivesMatter, #MarriageEquality and #RefugeesWelcome, might not themselves have contained ugly abuse, discussions of all those topics, across all platforms, unquestionably did.

On Facebook, fake news, and especially fake news posts in support of Donald Trump, came to dominate information about the election. It also contributed to an overall environment of hostility that degraded public discourse. Analysts have concluded that hyper-partisan conservative misinformation “performed better” on Facebook: the faker the content, the greater its viral potential.

Such content didn’t just target Hillary Clinton as a candidate and a woman, but engaged an audience that expressed itself in aggressive and threatening interactions with women more broadly. Clinton supporters on Facebook responded by forming hundreds of secret and closed groups, so they could share their political beliefs without being attacked online. Strong bonds between Facebook users are a key asset to the company, because they create “stickiness,” meaning closer ties, longer time spent and more personal investment. Viral fake news and harassment become compelling gravity wells for user engagement because they confirm people's beliefs and enable them to take action by sharing or finding more of the same. This emotional resonance feeds algorithmic processes that compound the spread of negative content, because such algorithms are designed to deliver content similar to what users are already consuming.
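That feedback loop is easy to sketch in code. The minimal ranking model below is purely illustrative -- the scoring rule and its weights are my assumptions, not any platform's actual algorithm -- but it shows how a feed optimized for engagement surfaces whatever users already react to most:

```python
# Minimal sketch of an engagement-ranked feed. The weights are invented
# for illustration; the point is that truth never enters the score.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    shares: int
    comments: int
    likes: int

def engagement_score(post: Post) -> float:
    # Shares and comments weigh most because they spawn new impressions.
    return 3.0 * post.shares + 2.0 * post.comments + 1.0 * post.likes

def rank_feed(posts: list[Post]) -> list[Post]:
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Measured policy analysis", shares=40, comments=25, likes=300),
    Post("Outrageous viral hoax", shares=900, comments=700, likes=1200),
])
for post in feed:
    print(f"{engagement_score(post):>7.0f}  {post.title}")
# The hoax tops the feed, attracts still more engagement, and ranks even
# higher on the next pass: a self-reinforcing loop.
```

Nothing in a scheme like this distinguishes outrage from insight; it simply amplifies whatever already spreads.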

All of which means money. Last year, Facebook turned that stickiness into $1.6 billion in profits. While the company wants to keep its users safe, it also wants to do so at the lowest possible cost. Why worry too much about harassment, or the negative effects that fake news and harassment have on civic and political participation, if people can engage in secret? Their behavior is captured either way, and everyone finds a place to express themselves. It would cost social-media companies far more money and time to ensure that public discourse is civil and promotes democracy.

Troll farms, “revenge porn” and sextortion sites and fraudulent follower services all similarly trade in false news, lies and harassment -- whether about people or policy -- to monetize misinformation and abuse. Downstream, “legitimate” media outlets, fueled by the same advertising networks, inevitably become part of this sprawling advertising and data sales machine.

The effect is diffuse but visible: Media professor and data analyst Jonathan Albright recently mapped the trajectory of fake news to illustrate what he calls the “ecosystem of real-time propaganda.” In addition to showing the role of mainstream media outlets in spreading lies and misinformation, Albright's analysis revealed the role that cookies -- the tracking mechanisms advertisers use to record user behaviors and preferences and capture marketing data -- played in the propagation of content. Albright visited 117 sites but ended up indirectly linked to more than 450 others via cookies, which enable the wider monetization of harmful content.

An additional economic factor in the profitability of fake news and harassment is the exploding use of bots, or non-human user accounts, that generate Internet traffic at virtually no cost and at scale. During the election, politicized bots were a major node in the spread of fake news, employed on Twitter by both Trump and Clinton supporters. Oxford University’s Project on Computational Propaganda estimates, however, that during the third presidential debate, bots sharing pro-Trump content outnumbered pro-Clinton bots by a factor of seven.

Between the first and second debates, one-third of pro-Trump tweets were shared by bots, many spreading fake news and misinformation. It is estimated that more than 23 million Twitter accounts are bots feeding the firehose. The fraudulent use of bots is well understood in the advertising industry; a 2015 study by the Association of National Advertisers concluded that advertisers would lose more than $7.2 billion globally to botnet fraud, itself a flavor of “fake news.”
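The economics of that amplification are easy to see in a toy simulation. In the sketch below, every parameter -- the account counts and the posting rates -- is invented for illustration; the point is how cheaply a small pool of automated accounts can dwarf organic traffic on a hashtag:

```python
# Toy simulation of bot amplification on a hashtag. All parameters are
# invented for illustration; only the ratio matters, not the values.
import random

random.seed(42)  # reproducible run

ORGANIC_USERS = 10_000  # real people, tweeting occasionally
BOTS = 500              # automated accounts, tweeting constantly

# A typical person posts on a hashtag 0-2 times a day; a bot can post
# hundreds of times at effectively zero marginal cost.
organic = sum(random.randint(0, 2) for _ in range(ORGANIC_USERS))
automated = sum(random.randint(200, 400) for _ in range(BOTS))

total = organic + automated
print(f"organic: {organic:>9,} tweets ({organic / total:.0%})")
print(f"bots:    {automated:>9,} tweets ({automated / total:.0%})")
# With 5 percent as many accounts, the bots generate roughly 15 times
# the traffic, making the hashtag look far larger than its human audience.
```

Every one of those fabricated tweets still registers as engagement data, which is why bot traffic is not just a fraud problem for advertisers but a revenue stream for the firehose itself.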

This hyper-networked, content-neutral schema is perfectly optimized not just for profit from online harassment or fake news, but also for the efficient spread of state-sponsored propaganda. Leveraging a distribution system in which every one of us is, in effect, a free digital laborer, sinister actors (such as the Russian government, perhaps) have adapted tried-and-true tactics to feed what security analysts call the “firehose of falsehood.” Hiring trolls to seed key nodes in the system with lies or doctored pictures can now be done with a fraction of the care and effort that was historically necessary. Propagandists don’t need to be consistent, or to incorporate credible images or sources; lies are given strength by the sheer profusion of “sources” that become enmeshed in their free and swift distribution.

While anyone can take advantage of this business model, the fact is that fake news and harassment thrive at the nexus of America's free-market and free-speech ideologies. The United States is home to virtually every major social media platform and innovative Internet technology in the world. We have erected virtually no financial, ethical or legal incentives for either individuals or social media companies to address fake news or harassment. Instead we prioritize profit, at great cost to people and society.

Social media companies operate globally and constantly respond to the laws and regulations of countries around the world, but they are grounded in American law and regulatory frameworks. The result is a sprawling social and technical system of immense power that not only doesn’t care about truth, democracy or human dignity, but is likely to undermine them.

Just this week, Facebook announced that it recognized a responsibility to "address the issue of fake news and hoaxes." In addition to the current bans on ad-sales linkages meant to remove financial incentives from fake sites and spammers, the company will partner with the Poynter International Fact-Checking Network to create a flagging system that will score news items for users.

Facebook's vice president of product development, Adam Mosseri, explained that this arrangement will address “bottom of the barrel” websites, meaning those that produce flagrant untruths, such as fake reports of a celebrity's death. Essentially, the company is trying to stay out of the business of deciding what constitutes the truth, while inhibiting fake news sites that pose as legitimate sources. Poynter's members will not be paid for this work. Users will also be able to report items that they feel are false, which raises the risky possibility that harassers, seeking to game the system, could mass-report pages or people in an effort to get Facebook to block content. Readers of Breitbart News, for example, might try to mass-report Poynter-verified content from legitimate news publications.

What can we do about all this? A good first step for any serious effort is to understand these incentives. Fake news and harassment are both enabled by what is known, somewhat ironically, as “the Good Samaritan clause” of the 1996 Communications Decency Act. Section 230 of the Act, regarded by many as the most important law on the Internet, absolves platforms of most forms of liability related to user-generated content. It was written to make sure that the Internet could expand quickly and with minimal restrictions, and so it has. David Post, a legal scholar and fellow at the Center for Democracy and Technology, credits Section 230 with the creation of a "trillion or so dollars of value.”

This law, like it or hate it, is centrally important to the conversion of false information into financial gain, conspiracies into political disruption, and pain into widely dispersed profit. There is virtually no reason under current law for any node in this system -- a person or a corporation -- to act in a socially responsible way to change this economy. This is why barring “fake news” sites is little more than a public palliative. Given that our president-elect is himself a major vector for both fake news and online harassment, addressing the complicated array of problems we face just got even harder.


By Soraya Chemaly
