Stephen Hawking freaks out about artificial intelligence

The physicist warns we could be making the "worst mistake in history." But Google's self-driving cars aren't scary

Published May 2, 2014 6:55PM (EDT)

(Reuters/Kimberly White)

Stephen Hawking, certified genius, is freaking out about our Skynet future. In an article for The Independent, the theoretical physicist and author of "A Brief History of Time" warns that the development of real artificial intelligence could be our worst mistake in history.

As he puts it: "One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all."

There's nothing particularly new about the notion of runaway AI turning humans into a bunch of meat puppets -- it's one of the oldest and most popular tropes in science fiction. The notability here stems entirely from the fact that the warning comes from Hawking. Someone who understands the physics of black holes and the "many-worlds interpretation of quantum mechanics" needs to be taken seriously when he warns that we're all just one click away from getting plugged into The Matrix.

Right?

Sure, it could happen. But Hawking picks some bad examples as evidence that we are accelerating toward a strong-AI future. He writes: "Recent landmarks such as self-driving cars, a computer winning at Jeopardy! and the digital personal assistants Siri, Google Now and Cortana are merely symptoms of an IT arms race fuelled by unprecedented investments and building on an increasingly mature theoretical foundation."

I'm not so sure about that "increasingly mature theoretical foundation." Google's self-driving cars are amazing, but they are largely a product of advances in cheap sensor technology combined with the increasing feasibility of real-time data-crunching. The cars aren't autonomous in any self-aware sense analogous to "2001's" HAL. The same is more or less true of Siri. We aren't really much closer to creating genuine machine intelligence now than we were 20 years ago. We've just gotten much better at exploiting the brute force of fast processing power and big-data-enabled pattern matching to solve problems that previously seemed intractable. These advances are impressive -- no question about it -- but not yet scary. The machines aren't thinking. They're still just doing what they're told to do.
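To make the distinction concrete, here's a minimal sketch (in Python, with entirely made-up data) of what that kind of pattern matching looks like at its simplest: a nearest-neighbor classifier that labels a new input by finding the stored example it most resembles. The labels and feature values below are hypothetical, chosen purely for illustration.

```python
# A toy nearest-neighbor classifier: brute-force pattern matching,
# stripped to its essentials. Nothing here "understands" anything;
# the program just measures distances to examples it was given.

import math

# Stored examples: (feature vector, label) pairs.
# In a real system the vectors would come from sensors; these are made up.
examples = [
    ((1.0, 1.0), "stop sign"),
    ((1.1, 0.9), "stop sign"),
    ((5.0, 5.2), "pedestrian"),
    ((4.8, 5.1), "pedestrian"),
]

def classify(point):
    """Return the label of the closest stored example (1-nearest neighbor)."""
    nearest = min(examples, key=lambda ex: math.dist(point, ex[0]))
    return nearest[1]

print(classify((1.05, 0.95)))  # -> stop sign
print(classify((5.1, 5.0)))    # -> pedestrian
```

Scale that idea up to millions of examples and thousands of features and you get systems that can drive cars and win quiz shows. But the underlying mechanism is still comparison and lookup, not comprehension.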

So Stephen, take a chill pill! At this juncture, we seem more likely to destroy our civilization by overheating the planet than by breeding malevolent AIs. Instead of worrying about mistakes we might make, maybe we should focus on the ones we've already made.


By Andrew Leonard

Andrew Leonard is a staff writer at Salon. On Twitter, @koxinga21.
