Nicholas Carr's dire warning: How technology is "making the world less interesting"

Carr released one of 2014's most buzzworthy tech books, "The Glass Cage," which looks at the effects of automation

Published December 24, 2014 5:00PM (EST)

Nicholas Carr (W.W. Norton/Joanie Simon)

Technology writer Nicholas Carr's newest book, "The Glass Cage: Automation and Us," examines how the tools and technologies we use today are fundamentally changing the way we behave and think. Wary of being slotted into the "technophile vs. technophobe" debate, Carr walks a delicate line between adulation and distrust of technology, examining how emerging phenomena like widespread automation are changing society, for better and worse.

Automation, whatever its pitfalls, is only going to become a bigger part of our lives. What interests Carr is, in his words, "whether we can do it wisely and avoid some of the drawbacks, and make sure that we use these powerful new tools to enrich our lives rather than impoverish them."

Salon spoke to Carr about how to determine what technology is "good" versus "bad" and where we should begin to ask questions about the tricky process of automation. This interview has been edited for length and clarity.

You’ve managed to articulate the drawbacks of technology without coming off as a Luddite. Was that difficult?

It is difficult because it seems to me that people swing between extremes of enthusiasm and skepticism, and I’ve felt this in my own career and in my own writing. It’s very easy to be, on the one hand, really enthusiastic about technology and attracted to it for its amazing qualities. That can short-circuit the critical part of your mind. On the other hand, it’s also easy to fall into the trap of just being skeptical and becoming so suspicious of technology that you lose the enthusiasm for the amazing stuff that software can do.

In some ways I think [my previous book] “The Shallows,” for instance, is a darker book than “The Glass Cage.” One thing I deliberately tried to do with “The Glass Cage” is to give voice both to my enthusiasm for computer technologies and to my skepticism. I wanted to strike the balance that allows you to remember why you get enthusiastic about technology but also ensure that your critical faculties are working all the time.

Is there a specific sector of technology that gives you pause, or that you feel we need to be more critical of than we are?

What interested me in the new book was the deep effects that relying on computers can have on our attentiveness to what we’re doing, on our engagement with our own work, and even on our engagement with the world. What I find particularly interesting, and kind of ominous, is that while the disabilities that can come from automation have been well documented in areas like flying and driving cars – the professional side of our lives – these same phenomena are now beginning to enter our personal lives.

I think I know what you're saying. If it were not for the fact that I live in a grid system, I might not know where I was going, because I rely so much on my phone and Google.

The most insidious effect is that when you become dependent on the software, you can quickly conclude that, well, this wasn’t a skill that was important to know anyway. In addition to losing the skill, you start to pretend to yourself that it never really mattered – that the computer can do it for you.

I do think that’s dangerous, to just assume a computer can do things, or that we can give [skills] away without suffering ill effects, without lessening our involvement with important things, or without lessening the sense of fulfillment that comes from learning and exercising our talents.

Right. We’re constantly sold the idea of technology as a means of connection, but I think what you’ve shown is that we’re also disconnecting from the desire to learn about and engage with the world.

I think it’s very easy for us to always put a computer screen between us and the world. Whether that world is the natural world or the cultural world or the social world or the physical world doesn't matter.

There are a couple of reasons we’re quick to do that. One is that we have this sense of convenience and efficiency – and sometimes it’s real – that we can do things better just by pulling out a smartphone or a computer and, as people in Silicon Valley like to say, remove the friction from our lives. I think most people would admit to this after some reflection, but it certainly shows up in the psychological literature as well.

We get a sense of satisfaction from directly engaging with people and culture and the world, and in struggling with our challenges. That is where a lot of the fulfillment in our lives comes from — from constantly coming up against friction and resistance. That can be in building friendships, it can be in exercising your navigational skills, it can be in really doing research; all of those things, I think, are very important to the richness of our lives. And, unfortunately, we seem very eager to sacrifice them simply to get a little convenience, or the kind of thrill that comes from using technology.

One of the first images that came to mind while reading your book was the movie "Wall-E" and its vision of future humans devoted to the most disengaged version of life.

I think you see it in a lot of science fiction that anticipates this dark world. Whether it’s H.G. Wells’ “The Time Machine” or Aldous Huxley’s “Brave New World,” a common theme is that life becomes too easy, and we become passive people whose superficial needs are immediately gratified but who lack any deep satisfaction with their lives.

The ability to think critically...

Right, because that too is a form of friction. It’s pushing back against the status quo, and that too requires a kind of engagement that we can lose when we just rush to Google everything or get a recommendation from Amazon or whatever.

Popular culture celebrates innovation very broadly, to the point where innovation has just become this buzzword that has lost some of its meaning. Is there a way to have a better, more nuanced understanding of how we should be looking at new technology?

Particularly in the last chapter of the book, I try to get at the question of what makes tools, innovations and technologies good or bad. What helped me think it through was drawing a distinction. One way to think about technology is as a means of production and consumption; that’s how our default understanding of technology is structured. It’s pretty much about, "will this help me fulfill some need I have quicker or easier?"

That’s one way to look at technology, and the other way is to think about technology as a means of experience. It isn’t just about the end product of using the tool; it’s about whether the tool expands or restricts the scope of my experience, my talent or my learning. A lot of the technology we get excited about these days doesn’t actually open up new opportunities and perspectives and horizons to us; it closes things down. We can think of a good tool as one that, in some way, opens the world up more fully to us, opening new ways of understanding, new ways to interact with the world and to develop our skills. That, and this is where the image of the glass cage comes in, [stands in contrast to] technology that might make it easier for us to get what we want, but circumscribes our action, narrows our perspective and makes the world, in a sense, less interesting to us.

That’s a very philosophical way to look at it, but to me it helps. It seems justified to get excited about tools and technologies because they’re so central to our experience, and yet it’s also right to feel threatened by them. One thing I wanted to explore, more to explain it to myself than anything, is when our enthusiasm is justified and when our skepticism is justified. It turns out that it really hinges on the effect of the tool or the technology on our experience of life.

Another tricky thing is the discussion of automation and jobs. 

On the jobs side, it seems pretty clear that as computers and software have advanced, they are able to take over a much broader range of jobs than factory machinery ever could. You see it in the ability of algorithms to do some legal work and medical diagnoses, not necessarily as well as people can do them, but well enough that it becomes tempting to get rid of the people. With self-driving cars, you see that we’re at the beginning of being able to send robots out into the world to act autonomously, and that also means that they’ll be able to take over —

Amazon delivery drones!

Yeah, other sorts of jobs. So it’s pretty clear that automation is going to move into the professional workforce and into the skilled workforce. Some people jump from there to saying that all jobs are going to go away in the next couple of decades, but I don’t think so. I think computers still have fundamental limits due to the fact that they have no common sense and no understanding of the world; there are a lot of tasks that really require a human being.

It does seem to me that the big challenge is whether we’ll be able to create enough good, well-paying jobs to replace the ones that are going away and will continue to go away. The record isn’t very encouraging. In the Industrial Revolution, big new categories of employment opened up; we simply haven’t seen that with computers yet, even though computer automation has been going on for a while. So I think we’re right to be worried, not about the number of jobs but about the polarization of work between well-paying and poorly paying jobs. On the other hand, it’s very hard to predict how that’s going to play out, because nobody can predict those kinds of big economic trends very well.

As it becomes possible to set software loose in the world to act or make decisions autonomously, we are confronted with the fact that, even if we’re not good at it, we have to give software an ethical sense. You can’t create an autonomous machine without it having to make some kind of moral decision or come up against a really hard choice. If a fully autonomous car is driving down the road and an animal jumps out in front of it, the car, just as a human being would, has to decide, either reflexively or thoughtfully: What do I do? Do I run over the animal? Do I swerve off the road and put the car in danger?

Computers are going to have to make those same moral judgments, and what we know about moral judgments is that you can’t boil them down into a statistical analysis or an algorithm. Different people will make different decisions in those circumstances, so the question becomes: if we have to program a moral sense into computers, whose moral sense is it, and who gets to make those decisions? Is it the car owner? The software programmer? The insurance company? The government? Whether it’s self-driving cars, self-flying drones or killer robots on the battlefield, these are hard ethical challenges that we’re up against already.
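To make the "whose moral sense?" question concrete, here is a minimal, entirely hypothetical sketch in Python. Nothing below reflects any real autonomous-vehicle system; the scenario, the `MoralPolicy` type and both example policies are invented for illustration. The point is structural: the same event produces different actions depending on which party's policy is loaded into the car.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Obstacle:
    """A purely hypothetical road event: something appears in the lane."""
    kind: str                            # e.g. "animal", "pedestrian", "debris"
    risk_to_occupants_if_swerve: float   # 0.0 (no risk) to 1.0 (severe risk)

# A "moral policy" maps a situation to an action. Whoever supplies this
# function (the owner, the programmer, the insurer, the regulator) is the
# party whose moral sense the car enacts.
MoralPolicy = Callable[[Obstacle], str]

def occupant_first_policy(obs: Obstacle) -> str:
    """Accept almost no risk to the car's occupants."""
    if obs.kind == "pedestrian":
        return "brake_hard"
    return "swerve" if obs.risk_to_occupants_if_swerve <= 0.1 else "brake_hard"

def minimize_total_harm_policy(obs: Obstacle) -> str:
    """Swerve for any living obstacle unless swerving is clearly riskier."""
    if obs.kind in ("animal", "pedestrian") and obs.risk_to_occupants_if_swerve < 0.5:
        return "swerve"
    return "brake_hard"

def decide(policy: MoralPolicy, obs: Obstacle) -> str:
    # The vehicle's behavior is entirely determined by the policy plugged in.
    return policy(obs)

# The same event under two encoded moralities yields two different actions.
deer = Obstacle(kind="animal", risk_to_occupants_if_swerve=0.3)
print(decide(occupant_first_policy, deer))      # -> brake_hard
print(decide(minimize_total_harm_policy, deer)) # -> swerve
```

Swapping one policy function for the other changes the car's behavior without the passenger ever seeing the difference, which is exactly why the question of who writes that function matters.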

Another thing that I found interesting was the false trust we place in computers, or the idea that the computer is always right and always accurate.

One of the most problematic aspects of computer automation as it’s playing out now is that we use computers all the time without really understanding the code or the algorithms. We use technology without knowing the full intentions of the companies making the software, and that opens us up to being manipulated in ways that can be subtle. Maybe in some cases it’s not that important, but in other cases I think it can be quite important. We trust the software to do things for us, but the people writing it have their own motivations and intentions, and we can’t always assume those are the same as ours.


By Sarah Gray

Sarah Gray is an assistant editor at Salon, focusing on innovation. Follow @sarahhhgray or email sgray@salon.com.
