In late June, Mark Zuckerberg announced the new mission of Facebook: “To give people the power to build community and bring the world closer together.”
The rhetoric of the statement is carefully selected, centered on empowering people, and in so doing, ushering in world peace, or at least something like it. Tech giants across Silicon Valley are adopting similarly utopian visions, casting themselves as the purveyors of a more connected, more enlightened, more empowered future. Every year, these companies articulate their visions onstage at internationally streamed pep rallies, Apple’s WWDC and Google’s I/O being the best known.
But companies like Facebook can only “give people the power” because we first ceded it to them, in the form of our attention. After all, that is how many Silicon Valley companies thrive: Our attention, in the form of eyes and ears, provides a medium for them to advertise to us. And the more time we spend staring at their platforms, the more money Facebook and Twitter make – in effect, it’s in their interest that we become psychologically dependent on the self-esteem boost of being wired in all the time.
This quest for our eyeballs doesn’t mesh well with Silicon Valley’s utopian visions of world peace and people power. Earlier this year, many sounded alarm bells when a "60 Minutes" exposé revealed the creepy cottage industry of “brain-hacking,” industrial psychology techniques that tech giants use and study to make us spend as much time staring at screens as possible.
Indeed, it is Silicon Valley’s continual quest for attention that both motivates its utopian dreams and compromises them from the start. As a result, the tech industry’s ethics often bend when it comes to product design.
Case in point: At January’s Consumer Electronics Show – a sort of Mecca for tech start-ups dreaming of making it big – I found myself in a suite with one of the largest kid-tech (children’s toys) developers in the world. A small flock of PR reps, engineers and executives hovered around the entryway as one development head walked my photographer and me through the mock setup. They were showing off the first voice assistant developed solely with kids in mind.
At the end of the tour, I asked if the company had researched or planned to research the effects of voice assistant usage on kids. After all, parents had been using tablets to occupy their kids for years by the time evidence of their less-than-ideal impact on children’s attention, behavior and sleep emerged.
The answer I received was gentle but firm: No, because we respect parents’ right to make decisions on behalf of their children.
This free-market logic – which holds that the consumer alone arbitrates a product’s value – is pervasive in Silicon Valley. What consumer, after all, would argue that they can’t make their own decisions responsibly? But a free market functions properly only when consumers operate with full agency and access to information – and tech companies are working hard to limit both.
In the same "60 Minutes" story on brain hacking, former Google product manager Tristan Harris said, “There’s always this narrative that technology’s neutral. And it’s up to us to choose how we use it.”
The problem, according to Harris, is that “this is just not true… [Developers] want you to use it in particular ways and for long periods of time. Because that’s how they make their money.”
Harris was homing in on the fact that, increasingly, it isn’t the price tag on the platform itself that earns companies money, but the attention they control on said platform – whether it’s a voice assistant, operating system, app or website. We literally “pay” attention to ads or sponsored content in order to access websites.
But Harris went on to explain that larger platforms, using systems of rewards similar to slot machines, are working not only to monetize our attention, but also to monopolize it. And with that monopoly comes incredible power.
If Facebook, for instance, can control hours of people’s attention daily, it can not only determine the rate at which it will sell that attention to advertisers, but also decide which advertisers or content creators it will sell to. In other words, in an attention economy Facebook becomes a gatekeeper for content – one that mediates not only personalized advertising, but also news and information.
This sort of monopoly brings not only the expected fiscal payoff but also immeasurable social and cultural power.
So how does Facebook’s new mission statement fit into this attention economy?
Think of it in terms of optics. For Facebook, as for the other tech giants of Silicon Valley, brand is the carotid artery. Brand ubiquity means Facebook is the first thing people check when they pull their phones from their pockets, or when they open Chrome or Safari (brought to you by Google and Apple, respectively). It means Prime Day is treated like a real holiday. Just as Kleenex means tissue and Xerox means copy, online search has become synonymous with Google.
Yet all these companies are painfully aware of what a brand-gone-bad can do – or undo. The current generation of online platforms is built on the foundations of empires that rose and fell while the attention economy was still incipient. Today’s companies have maintained their centrality by consistently copying (Instagram Stories, a clone of Snapchat) or outright purchasing (YouTube) their fiercest competitors – all to maintain or expand their brand.
And, perhaps as important, tech giants have made it nearly impossible to imagine a future without them, simply by being the most prominent public entities doing such imagining.
Facebook’s mission statement situates the company in our shared future and injects it with a moral, or at least charitable, sensibility – even if only in the form of vague “bring[ing] the world closer together”-style platitudes.
So how should we as average consumers respond?
In his award-winning essay "Stand Out of Our Light: Freedom and Persuasion in the Attention Economy," James Williams argues, “We must … move urgently to assert and defend our freedom of attention.”
To assert our freedom is to recognize and honestly evaluate the demands on our attention that all these devices and digital services represent. To defend it requires two forms of action. The first is individual: not unplugging completely, as the self-styled prophets of Facebook and Twitter encourage (before logging back on after a few months of asceticism), but unplugging partially, habitually and ruthlessly.
Attention is the currency upon which tech giants are built. And the power of agency and free information is the power we cede when we turn over our attention wholly to platforms like Facebook.
But individual consumers can only do so much. The second way we must defend our freedom is through our demand for ethical practices from Silicon Valley.
Some critics believe government regulation is the only way to rein in Silicon Valley developers. The problem is that the federal agencies that closely monitor the effects of products on consumers have no good category for online platforms yet. The Food and Drug Administration (FDA) tracks medical technology. The Consumer Product Safety Commission (CPSC) focuses on physical risk to consumers. The Federal Communications Commission (FCC) regulates content – not platforms. In other words, we have no precedent for monitoring social media or other online platforms and the methods they use to retain users.
Currently, no comparable agency leads dedicated research into the effects of platforms like Facebook on users. There is no Surgeon General’s warning. There is no real protection for consumers from unethical practices by tech giants – as long as those practices fall in the cracks between existing ethics standards.
While it might seem idealistic to hold out for the creation of a new government agency that monitors Facebook (especially given the current political regime), the first step toward curbing Silicon Valley’s power is simple: We must acknowledge freedom of attention as an inalienable right – one inextricable from our freedom to pursue happiness. So long as the companies producing the hardware surrounding us and the platforms orienting social life online face no strictures, they will actively work to control how users think, slowly eroding our society’s collective free will.
With so much at stake, and with so little governmental infrastructure in place, checking tech giants’ ethics might seem like a daunting task. The U.S. government, after all, has consistently shown an aversion to challenging Silicon Valley’s business and consumer-facing practices.
But while we fight for better policy and stronger ethics-enforcing bodies, we can take one more practical step: “pay” attention to ethics in Silicon Valley. Read about Uber’s legal battles and the most recent research on social media’s effects on the brain. Demand more ethical practices from the companies we patronize. Why? The best moderators of technology ethics thus far have been tech giants themselves – when such moderation benefits the companies’ brands.
In Silicon Valley, money talks, but attention talks louder. It’s time to reclaim our voice.