Why has Big Data so quickly become a part of industry dogma? Beyond the tremendous amounts of money being thrown at Big Data initiatives, both in research dollars and marketing efforts designed to convince enterprise clients of Big Data’s efficacy, the analytics industry plays into long-held cultural notions about the value of information. Despite Americans’ overall religiosity, our embrace of myth and superstition, our surprisingly enduring movements against evolution, vaccines, and climate change, we are a country infatuated with empiricism. “A widespread revolt against reason is as much a feature of our world as our faith in science and technology,” as Christopher Lasch said. We emphasize facts, raw data, best practices, instruction manuals, exact directions, instant replay, all the thousand types of precise knowledge. Even our love for gossip, secrets, and conspiracy theories can be seen as a desire for more privileged, inside types of information— a truer, more rarified knowledge. And when this knowledge can come to us through a machine—never mind that it’s a computer program designed by very fallible human beings—it can seem like truth of the highest order, computationally exact. Add to that a heavy dollop of consumerism (if it can be turned into a commodity, Americans are interested), and we’re ready to ride the Big Data train.
Information is comforting; merely possessing it grounds us in an otherwise unstable, confusing world. It’s a store to draw on, and we take threats to it seriously. Unlike our European brethren, we evince little tolerance for the peculiarities of genre or the full, fluid spectrum between truth and lies. We regularly kick aside cultural figures (though, rarely, politicians) who we’ve determined have misled us.
Our bromides about information—it’s power, it wants to be free, it’s a tool for liberation—also say something about our enthusiasm for it. The smartphone represents the coalescing of information into a single, personal object. Through the phone’s sheer materiality, it reminds us that data is now encoded into the air around us, ready to be called upon. We live amid an atmosphere of information. It’s numinous, spectral, but malleable. This sense of enchantment explains why every neoliberal dispatch from a remote African village must note the supposedly remarkable presence of cell phones. They too have access to information, that precious resource of postindustrial economies.
All of this is part of what I call the informational appetite. It’s our total faith in raw data, in the ability to extract empirical certainties about life’s greatest mysteries, if only one can deduce the proper connections. When the informational appetite is layered over social media, we get the messianic digital humanitarianism of Mark Zuckerberg. Connectivity becomes a human right; Facebook, we are told, can help stop terrorism and promote peace. (Check out Facebook.com/peace to see this hopeless naïveté in action.) More data disclosure leads to a more authentic self. Computerized personal assistants and ad networks and profiling algorithms are here to learn who we are through information. The pose is infantilizing: we should surrender and give them more personal data so that they can help us. At its furthest reaches, the informational appetite epitomizes the idea that we can know one another and ourselves through data alone. It becomes a metaphysic. Facebook’s Graph Search, arguably the first Big Data–like tool available to a broad, nonexpert audience, shows “how readily what we ‘like’ gets translated into who we are.”
Zuckerberg is only one exponent of what has become a folkway in the age of digital capitalism. Ever connected, perhaps fearing disconnection itself more than the fear of missing out, we live the informational appetite. We have internalized and institutionalized it by hoarding photos we’ll never organize, much less look at again; by tracking ourselves relentlessly; by feeling a peculiar anxiety whenever we find ourselves without a cell phone signal. We’ve learned to deal with information overload by denying its existence or adopting it as a sociocultural value, sprinkled with a bit of the martyrdom of the Protestant work ethic. It’s a badge of honor now to be too busy, always flooded with to-do items. It’s a problem that comes with success, which is why we’re willing to spend so much time online, engaging in, as Ian Bogost called it, hyperemployment.
There’s an inherent dissonance to all this, a dialectic that becomes part of how we enact the informational appetite. We ping-pong between binge-watching television and swearing off new media for rustic retreats. We lament our overflowing in-boxes but strive for “in-box zero”—temporary mastery over tools that usually threaten to overwhelm us. We subscribe to RSS feeds so as to see every single update from our favorite sites—or from the sites we think we need to follow in order to be well-informed members of the digital commentariat—and when Google Reader is axed, we lament its loss as if a great library were burned. We maintain cascades of tabs of must-read articles, while knowing that we’ll never be able to read them all. We face a nagging sense that there’s always something new that should be read instead of what we’re reading now, which makes it all the more important to just get through the thing in front of us. We find a quotable line to share so that we can dismiss the article from view. And when, in a moment of exhaustion, we close all the browser tabs, this gesture feels both like a small defeat and a freeing act. Soon we’re back again, turning to aggregators, mailing lists, Longreads, and the essential recommendations of curators whose brains seem somehow piped into the social-media firehose. Surrounded by an abundance of content but willing to pay for little of it, we invite into our lives unceasing advertisements and like and follow brands so that they may offer us more.
In the informational appetite, we find the corollary of digital detox and its fetishistic response to the overwhelming tide of data and stimulus. Information is power, particularly in this enervated economy, so we take on as much of it as we can handle. We succumb to what Evgeny Morozov calls “the many temptations of information consumerism.” This posture promises that anything is solvable—social and environmental and economic problems, sure, but more urgent, the problem of our (quantified) selves, since the appetite for information is ultimately a self-serving one. How can information make us richer? Smarter? Happier? Safer? How can we get better deals on gadgets and kitchenware? The informational appetite is the hunger for self-help in disguise.
Viral media thrives because it insists on both its newness and relevance—two weaknesses of the informational glutton. A third weakness: because it’s recent and seemingly unfiltered, it must be accurate. Memes have the lifespan and cultural value of fruit flies, but they’re infinite, and they satisfy our obsession with shared reference and cheap parody. We consume them with the uncaring aggression of those who blow up West Virginia mountains to get at the coal underneath. They exhaust themselves quickly, messily, when the glistening viral balloon is deflated by the revelation that the ingredients of the once-tidy story don’t add up. But no matter. There is always another to move on to, as well as someone (Wikipedia, Know Your Meme, Urban Dictionary, et al.) to catalog it.
The informational appetite is the never-ending need for more page views. It’s the irresistible compulsion to pull out your phone in the middle of a conversation to confirm some point of fact, because it’s intolerable not to know right now. It’s the smartphone as a salve for loneliness amid the crowd. It’s the “second screen” habit, in which we watch TV while playing games on our iPhone, tweeting about what we’re seeing, or looking up an actor on IMDB. It’s Google Glass and the whole idea of augmented reality, a second screen over your entire life. It’s the phenomenon of continuous partial attention, our focus split among various inputs because to concentrate on one would reduce our bandwidth, making us less knowledgeable citizens.
The informational appetite, then, is a cultural and metaphysical attitude as much as it is a business and technological ethic. But it also has three principal economic causes: the rapid decrease in the cost of data storage, the rising belief that all data is potentially useful, and the consolidation of a variety of media and communication systems into one global network, the Internet. With the ascension of Silicon Valley moguls to pop culture stardom, their philosophy has become our aspirational ideal—the key to business success, the key to self-improvement, the key to improving government and municipal services (or doing away with them entirely). There is seemingly no problem, we are told, that cannot be solved with more information and no aspect of life that cannot be digitized. As Katherine Losse noted, “To [Zuckerberg] and many of the engineers, it seemed, more data is always good, regardless of how you got it. Social graces—and privacy and psychological well-being, for that matter—are just obstacles in the way of having more information.”
The CIA’s chief technology officer isn’t immune. “We fundamentally try to collect everything and hang on to it forever,” he said at a 2013 conference sponsored by GigaOm, the technology Web site. He too preached the Big Data gospel, telling the crowd: “It is really very nearly within our grasp to be able to compute on all human generated information.” How far gone must you be to see this as beneficial?
Compared to this kind of talk, Google’s totalizing vision—“to organize the world’s information and make it universally accessible and useful”—sounds like a public service, rather than a grandiose, privacy-destroying monopoly. Google’s mission statement, along with its self-inoculating “Don’t Be Evil” slogan, has made it acceptable for other companies to speak of world-straddling ambitions. LinkedIn’s CEO describes his site thusly: “Imagine a platform that can digitally represent every opportunity in the world.” Factual wants to identify every fact in the world. Whereas once we hoped for free municipal Wi-Fi networks, now Facebook and Cisco are providing Wi-Fi in thousands of stores around the United States, a service free so long as you check into Facebook on your smartphone and allow Facebook to know whenever you’re out shopping. “Our vision is that every business in the world who have people coming in and visiting should have Facebook Wi-Fi,” said Erick Tseng, Facebook’s head of mobile products. Given that Mark Zuckerberg has said that connectivity is a human right, does requiring patrons to log into Facebook to get free Wi-Fi impinge on their rights, or does it merely place Facebook access on the same level of humanitarianism?
All of the world’s information, every opportunity, every fact, every business on earth. Such widely shared self-regard has made it seem embarrassing to claim more modest goals for one’s business. A document sent out to members of Y Combinator, the industry’s most sought-after start-up incubator, instructed would-be founders: “If it doesn’t augment the human condition for a huge number of people in a meaningful way, it’s not worth doing.”
As long as we have the informational appetite, more data will always seem axiomatic—why wouldn’t one collect more, compute more? It’s the same absolutism found in the mantra “information wants to be free.” We won’t consider whether some types of data should be harder to find or whether the creation, preservation, and circulation of some data should be subjected to a moral calculus. Nor will we be able to ignore the data sitting in front of us; it would be irresponsible not to leverage it. If you think, as the CEO of ZestFinance does, that “all data is credit data, we just don’t know how to use it yet,” then why would you not incorporate anything—anything— you can get on consumers into your credit model? As a lender, you’re in the business of risk management, precisely the field which, in David Lyon’s view, is so well attuned to Big Data. The question of collecting and leveraging consumer data, and making decisions based on it, passes from the realm of ethics to a market imperative. It doesn’t matter if you’re forcing total transparency on a loan recipient while keeping your algorithm and data-collection practices totally hidden from view. It’s good for the bottom line. You must protect your company. That’s business.
This is how Big Data becomes a religion, and why, as long as you’re telling yourself and your customers that you’re against “evil,” you can justify building the world’s largest surveillance company. You’re doing it for the customers. The data serves them. It adds relevance to their informational lives, and what could be more important than that?
Ethics, for one thing. A sense of social responsibility. A willingness to accept your own fallibility and that, just as you can create some pretty amazing, world-spanning technological systems, these systems might produce some negative outcomes, too—outcomes that can’t be mitigated simply by collecting more data or refining search algorithms or instituting a few privacy controls.
Matt Waite, the programmer who built the mugshots site for the Tampa Bay Times, only later to stumble upon some complications, introduced some useful provocations for every software engineer. “The life of your data matters,” he wrote. “You have to ask yourself, Is it useful forever? Does it become harmful after a set time?”
Not all data is equal, nor are the architectures we create to contain them. They have built-in capacities—“affordances” is the sociologist’s term of art—and reflect the biases of their creators. Someone who’s never been arrested, who makes $125,000 a year, and who spends his days insulated in a large, suburban corporate campus would approach programming a mugshot site much differently from someone who grew up in an inner-city ghetto and has friends behind bars. Whatever his background, Waite’s experience left him cognizant of these disparities, and led him to some important lessons. “What I want you to think about, before you write a line of code, is what does it mean to put your data on the Internet?” he said. “What could happen, good and bad? What should you do to be responsible about it?”
The problems around reputation, viral fame, micro-labor, the lapsing of journalism into a page-view-fueled horse race, the intrusiveness of digital advertising and unannounced data collection, the erosion of privacy as a societal value—nearly all would be improved, though not solved, if we found a way to stem our informational appetite, accepting that our hunger for more information has social, economic, and cultural consequences. The solutions to these challenges are familiar but no easier to implement: regulate data brokers and pass legislation guarding against data-based discrimination; audit Internet giants’ data-collection practices and hand down heavy fines, meaningful ones, for unlawful data collection; give users more information about where their data goes and how it’s used to inform advertising; don’t give municipal tax breaks (Twitter) or special privileges at local airfields (Google) to major corporations that can afford to pay their share of taxes and fees to the cities that have provided the infrastructure that ensures their success. Encrypt everything.
Consumers need to educate themselves about these industries and think about how their data might be used to their disadvantage. But the onus shouldn’t lie there. We should be savvy enough, in this stage of late capitalism, to be skeptical of any corporate power that claims to be our friend or acting in our best interests. At the same time, the rhetoric and utility of today’s personal technology is seductive. One doesn’t want to think that a smartphone is also a surveillance device, monitoring our every movement and communication. Consumers are busy, overwhelmed, lacking the proper education, or simply unable to reckon with the pervasiveness of these systems. When your cell carrier offers you a heavily subsidized, top-of-the-line smartphone—a subsidy that comes from locking you into a two-year contract, throughout which they’ll make a bundle off of your data production—then you take it. It’s hard not to.
“If you try to keep up with this stuff in order to stop people from tracking you, you’d go out of your mind,” Joseph Turow said. “It’s very complicated and most people have a difficult time just getting their lives in order. It’s not an easy thing, and not only that, many people when they get online, they just want to do what they do and get off.”
He’s right, except that, when we now think we’re offline, we’re not. The tracking and data production continues. And so we shouldn’t be too hard on one another, particularly those less steeped in this world, when they suddenly raise fears over privacy or data collection. Saying “What did you expect?” is not the response of a seasoned technology user. It’s a cynical displacement of blame.
It’s long past time for Silicon Valley to take an ethical turn. The industry’s increasing reliance on bulk data collection and targeted advertising is at odds with its rhetoric of individual liberty and innovation. Adopting other business models is both an economic and ethical imperative; perpetual surveillance of customers cannot endure, much less continue growing at such a pace.
The tech companies might have to give up something in turn, starting with their obsession with scale. Katherine Losse, the former Facebook employee, again: “The engineering ideology of Facebook itself: Scaling and growth are everything, individuals and their experiences are secondary to what is necessary to maximize the system.” Scale is its own self-sustaining project. It can’t be sated, because it doesn’t need to be. The writer and environmentalist Edward Abbey said, “Growth for the sake of growth is the ideology of the cancer cell.” The informational appetite is cancerous; it devours.
When the CEO of Factual speaks of forming a repository of all of the world’s facts, including all human beings’ “genetic information, what they ate, when and where they exercised,” he may indeed be on the path to changing the world, as he claims to be. But for whose benefit? If the world’s information is measured, collected, computed, and privatized en masse, what is being improved, beyond the bottom lines of a coterie of venture capitalists and tech industry executives? What social, economic, or political problem can be solved by this kind of computation? Will Big Data end a war? Will it convince us to convert our economy to sustainable energy sources, even if it raises gas prices?
Will it stop right-wing politicians from gutting the social safety net? Which data-driven insight will convince voters and legislators to accept gay marriage or to stop imprisoning nonviolent drug offenders? For all their talk of changing the world, technology professionals rarely get into even this basic level of specificity. Perhaps it’s because “changing the world” simply means creating a massive, rich company. These are small dreams. The dreamers “haven’t even reached the level of hypocrisy,” as the avuncular science fiction author Bruce Sterling told the assembled faithful at SXSW Interactive, the industry’s premier festival, in March 2013. “You’re stuck at the level of childish naïveté.”
Adopting a populist stance, some commentators, such as Jaron Lanier, say that to escape the tyranny of a data-driven society, we must expect to be paid for our data. We should put a price on it. Never mind that this is to give in to the logic of Big Data. Rather than trying to dismantle or reform the system—one which, as Lanier acknowledges in his book Who Owns the Future?, serves the oligarchic platform owners at the expense of customers—they wish to universalize it. We should all have accounts from which we can buy and sell our data, they say. Or companies will be required to pay us market rates for using our private information. We could get personal data accounts and start entertaining bids. Perhaps we’d join Miinome, “the first member-controlled, portable human genomics marketplace,” where you can sell your genomic information and receive deals from retailers based on that same information. (Again, I think back to HSBC’s “Your DNA will be your data” ad, this time recognizing it not as an attempt at imparting a vaguely inspirational, futuristic message, but as news of a world already here.) That beats working with 23andMe, right? That company already sells your genetic profile to third parties—and that’s just in the course of the (controversial, non-FDA-compliant) testing they provide, for which they also charge you.
Tellingly, a version of this proposal for a data marketplace appears in the World Economic Forum (WEF) paper that announced data as the new oil. Who could be more taken with this idea than the technocrats of Davos? The paper, written in collaboration with Bain & Company, described a future in which “a person’s data would be equivalent to their ‘money.’ It would reside in an account where it would be controlled, managed, exchanged and accounted for just like personal banking services operate today.”
Given that practically everything we do now produces a digital record, this model would make all of human life part of one vast, automated dataveillance system. “Think of personal data as the digital record of ‘everything a person makes and does online and in the world,’ ” the WEF says. The pervasiveness of such a system will only increase with the continued development and adoption of the “Internet of things”—Internet-connected, sensor-rich devices, from clothing to appliances to security cameras to transportation infrastructure. No social or behavioral act would be immune from the long arms of neoliberal capitalism. Because everything would be tracked, everything you do would be part of some economic exchange, benefiting a powerful corporation far more than you. This isn’t emancipation through technology. It’s the subordination of life, culture, and society to the cruel demands of the market, and the withholding of economic freedom from average people, because just walking down the street produces data about them.
Signs of this future have emerged. A company called Datacoup has offered people $8 per month to share their personal information, which Datacoup says it strips of identifying details before selling it on to interested companies. Through its ScreenWise program, Google has offered customers the ability to receive $5 Amazon gift cards in exchange for installing what is essentially spy software. The piece of software, a browser extension, was only available for Google’s own Chrome browser, which surely didn’t hurt its market share. Setting its sights on a full range of a family’s data, Google also offered a more expansive version of the program. Working with the market research firm GfK, Google began sending brochures presenting the opportunity to “be part of an exciting and very important new research study.” Under this program, Google would give you a new router and install monitoring software on anything with a Wi-Fi connection, even Blu-Ray players. The more you added, the more money you’d make. But it doesn’t add up to much. One recipient estimated that he could earn $945 from connecting all of his family’s devices for one year. A small sum for surrendering a year of privacy, for accepting a year of total surveillance. It is, however, more than most of us are getting now for much the same treatment.
Making consumers complicit—yet far from partners—in the data trade would only increase these inequities. We would experience data-based surveillance in every aspect of our lives (a dream for intelligence agencies). Such a scheme would also be open to manipulation or to the kind of desperate micro-labor and data accumulation exemplified by Mechanical Turk and other online labor marketplaces. You’d wonder if your friend’s frequent Facebook posts actually represented incidents from his life or whether he was simply trying to be a good, profitable data producer. If we were to lose our jobs, we might not go on welfare, if it’s even still available; we’d accept a dole in the form of more free devices and software from big corporations to harvest our data, or more acutely personal data, at very low rates. A gray market of bots and data-generating services would appear, allowing you to pay for automated data producers or poorly paid, occasionally blocked ones working, like World of Warcraft gold miners, in crowded apartments in some Chinese industrial center.
Those already better off, better educated, or technically adept would have the most time, knowledge, and resources to leverage this system. In another paper on the data trade, the WEF (this time working with the Boston Consulting Group) spoke of the opportunity for services to develop to manage users’ relations with data collectors and brokers—something on the order of financial advisors, real estate agents, and insurers. Risk management and insurance professionals should thrive in such a market, as people begin to take out reputational and data-related insurance; both fields depend as much on the perception of insecurity as the reality. Banks could financialize data and data-insurance policies, creating complex derivatives, securities, and other baroque financial instruments. The poor, and those without the ability to use the tools that’d make them good data producers, would lose out. Income inequality, already driven in part by the collapse of industrial economies and the rise of postindustrial, post-employment information economies, would increase ever more.
This situation won’t be completely remedied by more aggressive regulation, consumer protections, and eliminating tax breaks. Increasing automation, fueled by this boom in data collection and mining, may lead to systemic unemployment of a kind we’ve never seen. Those contingent workers laboring for tech companies through Elance or Mechanical Turk will soon enough be replaced by automated systems. It’s clear that, except for an elite class of managers, engineers, and executives, human labor is seen as a problem that technology can solve. In the meantime, those whose sweat this industry still relies upon find themselves submitting to exploitative conditions, whether as a Foxconn worker in Shenzhen or a Postmates courier in San Francisco. As one Uber driver complained to a reporter: “We have a real person performing a function, not a Google automatic car. We have become the functional end of the app.” It might not be long before he is traded in for a self-driving car. They don’t need breaks, they don’t worry about safety conditions or unions, they don’t complain about wages. Compared to a human being, automatic cars are perfectly efficient.
And who will employ him then? Who will be interested in someone who’s spent a few years bouncing between gray-market transportation facilitation services, distributed labor markets, and other hazy digital makework? He will have no experience, no connections, and little accrued knowledge. He will have lapsed from subsistence farming in the data fields to something worse and more desultory—a superfluous machine.
Automation, then, should ensure that power, data, and money continue to accrue in tandem. Through ubiquitous surveillance and data collection, now stretching from computers to cell phones to thermostats to cars to public spaces, a handful of large companies have successfully socialized our data production on their behalf. We need some redistribution of resources, which ultimately means a redistribution of power and authority. A universal basic income, paid for in part by taxes and fees levied on the companies making fabulous profits out of the quotidian materials of our lives, would help to reintroduce some fairness into our technologized economy. It’s an idea that’s had support from diverse corners—liberals and leftists often cite it as a pragmatic response to widespread inequality, while some conservatives and libertarians see it as an improvement over an imperfect welfare system. As the number of long-term unemployed, contingent, and gig workers increases, a universal basic income would restore some equity to the system. It would also make the supposed freedom of those TaskRabbit jobs actually mean something, for the laborer would know that even if the company cares little for his welfare or ability to make a living, someone else does and is providing the resources to make sure that economic precarity doesn’t turn into something more dire.
These kinds of policies would help us to begin to wean ourselves off of our informational appetite. They would also foment a necessary cultural transformation, one in which our collective attitude toward our technological infrastructure would shift away from blind wonder and toward a learned skepticism. I’m optimistic that we can get there. But to accomplish this, we’ll have to start supporting and promoting the few souls who are already doing something about our corrupted informational economy. Through art, code, activism, or simply a prankish disregard for convention, the rebellion against persistent surveillance, dehumanizing data collection, and a society administered by black-box algorithms has already begun.
Excerpted from "Terms of Service: Social Media and the Price of Constant Connection" by Jacob Silverman. Published by Harper, a division of HarperCollins. Copyright 2015 by Jacob Silverman. Reprinted with permission of the author. All rights reserved.