Interview: Franklin Foer's "World Without Mind" describes how algorithms erode free will

Franklin Foer discusses how Silicon Valley is infiltrating our wallets, newspapers and minds

By Keith A. Spencer

Senior Editor

Published September 10, 2017 9:00AM (EDT)

"World Without Mind: The Existential Threat of Big Tech" by Franklin Foer   (Penguin Press/theatlantic.com)
"World Without Mind: The Existential Threat of Big Tech" by Franklin Foer (Penguin Press/theatlantic.com)

Just as a canary in a coal mine is the first to faint at a sniff of bad air, journalists were the first to feel the media industry's decline as the tech industry sank its fangs into it. From 2005 to 2015, the number of newspaper editors and reporters in the United States declined by 25,090, a 38 percent drop, according to Bureau of Labor Statistics data. The number of digital-only journalists increased by only 7,170 over the same span.

Indeed, the ubiquity of the internet precipitated an enormous shift in the way information was disseminated and consumed, and the media industry was hit hardest. Tech behemoths like Google and Facebook essentially re-engineered the channels through which information and news flowed, making Silicon Valley, rather than publishers, first in line to profit from writers' labor.

Franklin Foer, a correspondent for The Atlantic and a fellow at the New America Foundation, was well situated to observe the changes afoot in the publishing and journalism world. Foer served for many years as editor of the venerable political magazine the New Republic, one of the few surviving "little magazines" of the twentieth century, and was present when it was bought by Facebook co-founder and billionaire Chris Hughes in 2012. Hughes was an ardent fan of the magazine, and his presence seemed a boon at first ("My entire career at the New Republic had been spent dreaming of such a benefactor," Foer writes), but things soon soured. Hughes eventually pushed out Foer and literary editor Leon Wieseltier, resulting in a mass exodus: almost 50 writers and editors resigned in solidarity.

Foer's close encounters with the techie kind gave him an inside perspective on the way tech entrepreneurs think, act and believe, and on the way those beliefs are imparted to their properties and, ultimately, to our minds. In his new book, "World Without Mind: The Existential Threat of Big Tech," Foer tells the story of the tech industry by examining how Silicon Valley has sought to control and disseminate knowledge. His own experience at the New Republic serves as a microcosm for larger debates raging in the Western world: namely, how to preserve journalism in a system that seems determined to devalue it, and how to keep writing and editing viable professions. The stakes are democracy itself, which cannot survive without a fourth estate as a check on elite power. I spoke with Foer about his new book and his perspectives on Silicon Valley's immense power over our minds.

This interview has been edited and condensed.

At least a quarter of your book is dedicated to this whirlwind tale of what happened to the New Republic after Facebook co-founder Chris Hughes bought the magazine. For those who may not be familiar with that story, can you briefly explain what happened?

Franklin Foer: So, I had grown up with the New Republic. It was the magazine my dad read, and it was a place I came to work shortly after college. It was not easy. I had a really romantic view of it. I'd been a writer there, and I became an editor of the magazine. I left as editor in 2010 because it felt like we were just this little thing that never had a prayer of making money, and the media environment was so difficult that the owner didn't really want to spend money on the thing. So we kept getting sold from one group of owners to the next. I was exhausted by that.

Then, a couple of years after I left, the magazine was up for sale again and Chris Hughes walked in the door. He was 28 years old at the time. He had been a co-founder of Facebook, and he'd been on the board of Upworthy; he really seemed devoted to the idea of serious long-form journalism and serious policy writing. He also seemed like one of the few people who both had the tools to make journalism work in the digital age and the resources to be able to fund it.

So for about a year and a half, we had this really great collaborative relationship. But at a certain point, he got frustrated with our inability to grow traffic enough and our inability to construct a good business model for the New Republic. So he decided he was going to make a big change. The first change was that he brought in a guy called Guy [Vidra], who had worked at Yahoo and was going to become the publisher and the CEO of the New Republic. That was new, and it spelled trouble for me, because we had never had a CEO before, and I would be reporting to him as the editor of the magazine.

As soon as I met him, I knew things were going to go south, because he had a radically different vision for what the New Republic could be, and I knew it reflected Chris' vision for what the New Republic would be. We had already started to move in this direction . . . the idea was that we needed to catapult ourselves to an entirely new stratosphere of traffic, because that's what it took [to] generate meaningful revenue. That meant we had to coast off of Facebook, that we had to start producing things that would go viral on Facebook, or at least achieve a mass audience.

This was especially shocking to the system at the New Republic, as we'd been this small elite magazine, and we were suddenly being asked to achieve something on a whole new scale.

So it was a culture shock, where we had to speed up and reconfigure our mindset. It was a big change for me because it wasn’t the way that I had been trained and it was a big change for the staff because it wasn’t [the] reason that they’d come to work at the New Republic and it wasn’t how they were trained. So it culminated in a very painful, and bad, and explosive way.

One day I heard a rumor that there was a guy walking around New York who was saying he was going to be editor of the New Republic, and he was offering people jobs. So when word got back to me, I quit the magazine. When I quit, a bunch of other people walked out the door with me and it became… well, my book is really not about this story. This is a story that's there in the background, and I felt it was important for me to acknowledge and write about it because it was a painful story for me, it made me angry, and it also helps [give] you a sense of why I fell into this topic. It became a small parable for journalism in a much bigger sense, which was that journalism was coming to depend much more on both Google and Facebook over the course of the last decade. My argument in the book is that this is a dangerous dependence, because Facebook's values and Google's values end up becoming the values of the media that depend on them.

I wanted to ask more about that. You write about the conflict between journalism's values and Google's or Facebook's values. To that end, one of the things I found pretty interesting was how you wrote that Facebook has a "paternalistic" view of its users. Can you describe what you mean by that, or elaborate on it?

Mark Zuckerberg is pretty open about this. He has described Facebook as being like a government. It sets policies. I think that he has generally had a vision of where he wants to lead his users. [The] idea of sharing is deeply embedded in Facebook; he wants the people who use Facebook to become more "sharing" individuals. This is consistent with one of the big values in Silicon Valley, which is transparency; Silicon Valley believes in the religion of transparency. So one way in which he's justified [Facebook's existence] is that it causes people to be more transparent. They expose more of their lives to their friends and to their family, and they expose their views, they expose where they go on holiday . . . and the idea is that this is ultimately going to make us better human beings.

I mean, Facebook is of two minds about a lot of these things. If we go to the question about fake news — which is one of the dominant discussions about Facebook right now — Facebook, in that instance, threw up its hands and said ‘there’s not a whole lot that we can do about fake news,’ because they don’t want to be in the business of having to say, ‘What’s legitimate news and what’s illegitimate news?’ because somebody’s going to end up pissed off as a result of whatever decision that they make and that would be bad for business. So they’ve had to refrain from being too active in that debate.

But in a lot of other ways, they really have tried to [push] a set of ideals … and they end up reconfiguring [the user’s] ideals in order to justify their business model.

So something like video is a good example. Facebook decided recently that video is the thing that's going to make them a lot of money, so their algorithm elevates video. If I open my Facebook feed, it shows one video after another. A lot of it is just silly, kind of garbage; you don't understand why it's there at the top of your feed. That's their decision. And they'll probably come up with some ex post facto rationale for why video will make us better human beings.
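To make that mechanism concrete, here is a minimal sketch of the kind of opaque ranking decision Foer describes. It is purely illustrative: the post fields, weights and boost values are invented for the example and are not Facebook's actual signals.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    content_type: str   # e.g. "video", "photo", "text"
    engagement: float   # normalized likes/comments/shares

# A single hidden constant encodes a business decision: boost video.
# These weights are hypothetical.
CONTENT_WEIGHTS = {"video": 3.0, "photo": 1.5, "text": 1.0}

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order posts by engagement multiplied by a content-type boost."""
    return sorted(
        posts,
        key=lambda p: p.engagement * CONTENT_WEIGHTS.get(p.content_type, 1.0),
        reverse=True,
    )

feed = rank_feed([
    Post("a friend", "text", engagement=0.9),
    Post("a stranger", "video", engagement=0.4),
])
# The stranger's video scores 0.4 * 3.0 = 1.2 and outranks the friend's
# text post at 0.9 * 1.0 = 0.9, and the user never sees why.
print([p.author for p in feed])  # ['a stranger', 'a friend']
```

Changing one weight in CONTENT_WEIGHTS silently reorders every user's feed: that is the invisible, after-the-fact-rationalized power Foer is pointing at.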

So do you think there is a certain irony in how a site like Facebook demands its users be more and more transparent about their own lives, and share more and more — while at the same time, they’re totally opaque about what you’re seeing in your feed and why, or why you’re being recommended to “friend” this person, or like this product?

Yeah, you call it irony; I call it hypocrisy. Transparency is a noble sentiment for its users, but it's a bit rich that they wouldn't apply that same spirit to themselves. When you see that hypocrisy, you know you should be pretty skeptical.

So you’ve talked about the conflicts between what we think and believe, and what Facebook wants us to think and believe (so that it makes money). But how does social media conflict with journalism, and journalism’s ideals?

When I started off in journalism, you knew there was an audience out there and that you wanted people to read what you produced. But it also felt like you had a limited ability to shape the audience, or to acquire an audience, for what you were doing. So you didn’t really think too much about that.

But what happens in this new world is that we have access to a huge amount of data and we have these levers that we can use in order to get big audiences. If you game Facebook and Google in the right way, then you can achieve a mass audience.

So I think that audiences become much more of an obsession for journalism as a result. And not just in the broad sense... It’s more, “I’m going to write a piece and I want the biggest audience for this piece. How can I craft this piece to have the biggest audience? What’s the headline? What’s the photo? Let’s look at the data to see what the data tells us.”

I think most people try to resist [the] impulse to pander. But on the other hand, I know from my own experience that I couldn't resist that impulse: if you saw something that worked, your natural instinct was to repeat it, and so . . . it becomes more formulaic.

Of course, formulas always existed in journalism. When I was just getting into the business, Time and Newsweek knew that if they put Jesus' face on the cover, it would do really well on the newsstands. So every year, they would put Jesus' face on the cover. There was a formula there. But I think the idea of reverse engineering journalistic success, the idea [that] we have this data that we can exploit . . . there's just more opportunity to pander, to give people what they want. I see that as a very, very dangerous trend over the course of recent history.

I visited the Washington Post when they moved into their new office, after Jeff Bezos bought the paper. In the center of the newsroom are screens with their own analytics tools, showing what's popular and where traffic is coming from. It's almost like an assembly line, where you just see these things telling you how to work. They're just flashing at you. I don't know how much they matter practically, but they're there for symbolic reasons. Even at the New Republic, after I left, they put up TV sets in the newsroom that had flashing traffic data.

In the book you use this term “gatekeepers” to describe companies like Facebook, Google and Amazon. Can you describe this concept?

[Say] an individual wants information, or sets out to consume information, or knowledge, or culture. It needs to be curated for you, because there is just so much news, there is so much culture. And so we've always had people who've curated that for us. Until recently, it was newspaper editors, it was television programmers. In some ways, what the internet did was disintermediate those old institutions. It meant that we, as individuals, had much greater ability to pick what we read and what we wanted. But even if we have access to this broader sea of news and entertainment, we still need somebody to help us sort through it.

So what Google, and Facebook, and to some extent Amazon do is provide us with these mega-tools that help us sort through the internet. They help us filter the world. They claim that we, as individuals, have a whole lot of agency to sort the world and pick out things in ways that suit us, which is true to some extent. But they also have enormous power to filter the world for us. And part of their trick is that they're invisible gatekeepers.

When you had the newspaper, it was pretty clear that somebody was making decisions about what was important and what wasn’t important. With Google and Facebook, it’s pretty mysterious who’s making those decisions about what’s important and what’s not important, and the reasons why they’ve considered one thing more relevant to you than something else. That invisible power is immense. The fact that we’re not aware of it, I think, actually, makes it even greater, and so things can be altered in very mysterious sorts of ways. Why was it that video ended up starting to populate my newsfeed more than words?

The implications are immense. In news organizations, we've decided we all need to produce a ton of video, even if that wasn't our expertise before. So news organizations make this big investment in video because Facebook rather mysteriously has altered its algorithms and its business model. We have no choice but to bow down, even if, ultimately, that's changing the quality of information that we get in a massive way.

In one place in the book, you write that "algorithms are meant to erode free will." Can you explain what you mean by that?

What algorithms do is they create an architecture for the choices that we make. What is it that you buy next? Well, it’s either A or B. A and B are presented to you by Amazon, as opposed to this whole wide range of things. Algorithms are meant to reduce human choice.

Once upon a time, if you were going to get a loan from me, I would have had to look at your file, and I would have to make a decision about whether you were going to get a loan. Maybe we would meet and talk about it. There would be some level of human involvement and human interaction. Now, a lot of this is determined by an algorithm. I enter your name and a few things about you into the system, and it returns an answer about whether you get a loan or not. I'm giving you pretty small-bore examples. But the point is that we are creating systems, meant to efficiently deliver answers to us, that remove human beings from the equation. Human beings are cut out of the system.
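To illustrate Foer's loan example, here is a minimal sketch of a fully automated, rule-based decision with no human in the loop. The applicant fields and thresholds are hypothetical, invented for illustration, and not any real lender's criteria.

```python
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    credit_score: int
    annual_income: float
    existing_debt: float

def approve_loan(a: Applicant, amount: float) -> bool:
    """Return a yes/no answer from fixed rules; nobody reviews the file."""
    # Hypothetical cutoffs standing in for a lender's policy.
    debt_to_income = (a.existing_debt + amount) / a.annual_income
    return a.credit_score >= 680 and debt_to_income <= 0.40

applicant = Applicant("J. Doe", credit_score=700,
                      annual_income=60_000, existing_debt=10_000)
print(approve_loan(applicant, amount=12_000))  # True: 22000/60000 is about 0.37
```

There is no meeting and no judgment call; the system simply returns an answer, which is exactly the removal of human beings from the equation that Foer describes.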

# # #

"World Without Mind: The Existential Threat of Big Tech" comes out on Sept. 12 from Penguin Press.


By Keith A. Spencer

Keith A. Spencer is a social critic and author. Previously a senior editor at Salon, he writes about capitalism, science, labor and culture, and published a book on how Silicon Valley is destroying the world. Keep up with his writing on Twitter, Facebook, or Substack.
