Our weird robot apocalypse: How paper clips could bring about the end of the world

If you dread a robot revolt, stop worrying about killer computers, and start worrying about ... paper clips?

Published August 17, 2014 11:00PM (EDT)

(chictype via iStock)

Nick Bostrom is explaining to me how superintelligent AIs could destroy the human race by producing too many paper clips.

It's not a joke. Bostrom, the director of the Future of Humanity Institute at Oxford University, is the author of "Superintelligence: Paths, Dangers, Strategies," an exploration of the potentially dire challenges humans could face should AIs ever make the leap from Siri to Skynet. Published in July, the book was compelling enough to spur Elon Musk, the founder and CEO of Tesla, into tweeting out a somber warning:

[embedtweet id="495759307346952192"]

Speaking via Skype from his office in Oxford, Bostrom lays out a thought experiment that demonstrates just how badly things could go awry.

It doesn't have to be paper clips. It could be anything. But if you give an artificial intelligence an explicit goal -- like maximizing the number of paper clips in the world -- and that artificial intelligence has gotten smart enough to invent its own super-technologies and build its own manufacturing plants, then, well, be careful what you wish for.

"How could an AI make sure that there would be as many paper clips as possible?" asks Bostrom. "One thing it would do is make sure that humans didn't switch it off, because then there would be fewer paper clips. So it might get rid of humans right away, because they could pose a threat. Also, you would want as many resources as possible, because they could be used to make paper clips. Like, for example, the atoms in human bodies."

Then Bostrom moves on to even more unsettling scenarios. Suppose you attempted to constrain your budding AIs with goals that seem perfectly safe, like making humans smile, or making them happy. What if the AI decided to achieve this goal by "taking control of the world around us, and paralyzing human facial muscles in the shape of a smile?" Or decided that the best way to maximize human happiness was to stick electrodes in our pleasure centers and "get rid of all the parts of our brain that are not useful for experiencing pleasure."

"And then you end up filling the universe with these vats of brain tissue, in a maximally pleasurable state," says Bostrom.

And if you think that Keanu Reeves has a snowball's chance in hell of actually outwitting a real Matrix, well, think again.

But I don't know. To me, the kind of paper clip doom described by Bostrom doesn't seem very superintelligent at all. It seems kind of dumb. If we succeed in creating machines that actually become smarter than us -- so much smarter that they redesign themselves to become even more brilliant, setting off what Bostrom calls an "intelligence explosion" that leaves puny humans far behind -- wouldn't they be smart enough to understand what we meant, instead of taking us literally at our word? Why must doom be inevitable? Why couldn't our superintelligent spawn grasp the lessons inherent in the existing corpus of human knowledge and help us prosper? Why does it always have to be Skynet that's around the corner, instead of utopia?

The answer is, nobody knows. There's a core frustration involved in reading "Superintelligence," which is that, despite our best guesses, we have no idea what is going to happen, or when. Bostrom says true superintelligence could appear within a decade, or it could take 80 years, or it might never happen at all (but he isn't betting on that last scenario). We don't know what technical path will lead us to the transformative moment, or how future AIs will regard us, or themselves. We don't have the slightest inkling what their own autonomous goals will be. "Superintelligence" is filled with "mights" and "coulds" and "shoulds" and "it seems likelys," and Bostrom even goes so far as to concede, in his introduction, that "many of the points made in this book are probably wrong." There's no road map here that definitely gets us to our destination.

The only thing we do know reasonably well is that the current high-flying technology successes that seem like proof that true AI emergence is imminent -- the self-driving cars, chess-grandmaster-conquering software programs, voice-activated smartphones -- do not actually signal that we're anywhere closer to solving the really hard AI challenges that have stumped computer scientists for decades. Bostrom concedes that the flashiest recent advances have mainly been fueled by clever brute-force approaches. Throw a lot of sensors and processing power and data at a specific problem and you can achieve amazing things. But true autonomy, the ability to think for oneself, the capacity to derive actual meaning from existence? We're not there yet.

Which raises another question: If we're not sure when or even if superintelligent machines are in our immediate future, why should we be worrying about them, instead of other things like, say, the challenges of an overheated planet, which is something that scientists do have a pretty good handle on?

But for Bostrom, the existential threat to humanity posed by rampaging AIs is great enough to trump any other challenge.

"The underlying reason for writing the book is to increase the chances that when we eventually have to make this transition we do it successfully," says Bostrom. "We don't know exactly how hard the problem is -- we know that it is very difficult, but we don't know whether solving the problem of controlling superintelligent machines is just very difficult or super-duper-ultra difficult. And we also don't know how long we have to do it, because we don't know how far away AI is. So we have what may be an extremely difficult problem with an unknown time to solve it, on which quite possibly the entire future of humanity depends."

"So there is a certain sense of urgency in trying to get the best minds, or some of them, to focus on this."


By Andrew Leonard

Andrew Leonard is a staff writer at Salon. On Twitter, @koxinga21.
