Our science-fiction apocalypse: Meet the scientists trying to predict the end of the world

How exotic and unlikely-sounding disasters could kill every last human being

Published October 5, 2014 7:00PM (EDT)

(everlite via iStock)

It’s been a summer of bad news, what with increasingly bleak geopolitical turmoil, conflict in American streets and carbon spewing into the atmosphere faster than ever. But I’ve been talking to experts in the field of catastrophic risk, and I’m happy to report that, all seeming evidence to the contrary aside, the effort to prevent human extinction is making progress. When it comes to the survival of the human race — the long-term, species-level survival — global war and global warming may be relatively small potatoes.

“Global warming is very unlikely to produce an existential catastrophe,” Nick Bostrom, head of the Future of Humanity Institute at Oxford, told me when I met him in Boston last month. “The Stern report says it could hack away 20 percent of global GDP. That’s, like, how rich the world was 10 or 15 years ago. It would be lost in the noise if you look in the long term.”

Twenty percent sounds pretty bad, especially when you consider that by the end of the century there will be a few billion more people to share it with. But, Bostrom believes, even the misery caused by this kind of decline pales in comparison to what could be inflicted by high-tech nightmares: bioengineered pandemic, nanotechnology gone haywire, even super-intelligent AI run amok. These exotic and unlikely-sounding disasters could kill every last human being very quickly, and it is these possibilities that are finally getting some attention. In the last few years, a number of institutes have sprung up to begin to do serious research on the risks of emerging technology, some of them attached to the world’s most prestigious universities and stocked with famous experts. In addition to FHI, there is the Centre for the Study of Existential Risk at Cambridge and the Future of Life Institute at MIT, along with the Lifeboat Foundation, the Foresight Institute and several others. After years of neglect, the first serious efforts to prevent techno-apocalypse may be underway.

The field has benefited from a well-informed patron, the Estonian entrepreneur and computer programmer Jaan Tallinn, co-founder of Skype. When I Skyped with Tallinn, he told me that his concern with the future of humanity began in 2009, when a lawsuit between Skype and eBay left him temporarily sidelined. Finding himself with millions of dollars in the bank and no obligations, he spent his time reading through the website Less Wrong, “a community blog devoted to refining the art of human rationality.”

Here he found the writings of Eliezer Yudkowsky, a self-taught computer theorist who argued that the emergence of artificial intelligence might turn out very badly for humans. Superintelligent AI might prove to be smart not in the way Einstein was smart compared to the average person, but in the way the average person is smart compared to a beetle or worm. We humans could quickly find ourselves at the AI’s mercy as it transformed our environment to fit its own goals and not ours.

“When I looked at those arguments,” says Tallinn, “I didn’t find any flaws. So I contacted him and we started a discussion.” Tallinn now partially supports Yudkowsky’s work at the Machine Intelligence Research Institute, along with at least seven other institutes in the U.S. and U.K. that study existential risk.

He and Yudkowsky were not the first to consider these problems. Scientists have been concerned with the apocalyptic consequences of advanced technology at least since the Manhattan Project, when Robert Oppenheimer ordered study LA-602 — "Ignition of the Atmosphere with Nuclear Bombs" — to calculate whether a nuclear detonation would cause an uncontrolled chain of nuclear reactions and burn up the atmosphere. (The researchers’ conclusion: it wouldn’t. Thankfully, they were right.) Oppenheimer, along with Einstein and other physicists, went on to oppose development of the hydrogen bomb, to no avail.

More recently, cosmologist Martin Rees, in his 2003 book, "Our Final Hour," argued that humanity’s chances of surviving this century are no better than 50/50. He’s now at the Centre for the Study of Existential Risk at Cambridge, which was started after a dinner with Tallinn and the philosopher Huw Price. The Future of Life Institute at MIT, the newest member of the group, took shape in conversations at astrophysicist Max Tegmark’s home in Cambridge, a neighborhood where he found no shortage of like-minded colleagues, including Nobel prize winner Frank Wilczek and geneticist George Church.

The work and the priorities of these institutes are not all the same. Rees is less sanguine than Bostrom about global warming, for example, and CSER includes preventing out-of-control climate change as one of its goals. MIRI is narrowly focused on theoretical AI research, while the more philosophical FHI pays more attention to the meta-issue of how to think logically about long-term problems (and pays the bills by helping insurance companies make better predictions). What they agree on is that scientists, policymakers and the public should take the prospect of accidental self-destruction more seriously.

Which scenario should we worry about first? The risks are hard to quantify. But it seems the immediate threat from advanced technology depends mostly on how developed that technology actually is. That puts bioengineering at the top of the list, since the technology to build deadly viruses in a lab already exists. “My worst nightmare,” said Rees, “is the lone weirdo ecofanatic proficient in biotech, who thinks it would be better for the world if there were fewer humans to despoil the biosphere.” University of Wisconsin professor Yoshihiro Kawaoka recently showed just how real this possibility is when he engineered a strain of the deadly 2009 “swine flu” virus that could evade the immune system. So far he has not described exactly how he did it, but his paper is apparently complete and ready for publication.

Nanotech — engineering on a billionths-of-a-meter scale — also already exists, with applications as diverse as sunscreen and scratch-resistant coatings. But its potential application — microscopic factories of self-replicating bots with the power to make almost anything out of common materials — could be decades away, if it is ever achieved. If that happens, though (and there are billions invested in nanotech R&D), it could theoretically pose a grave threat not just to human beings but to all life on earth. In his influential book, "Engines of Creation," nanotech pioneer (and FHI adviser) Eric Drexler outlined what’s come to be called the “gray goo” scenario, in which tough, omnivorous “bacteria” could out-compete real bacteria: “they could spread like blowing pollen, replicate swiftly and reduce the biosphere to dust in a matter of days.”

Mutiny by the machines, the scenario that first drew in Tallinn, is probably the least likely in the short term. It is also by far the coolest. In his recent book, "Superintelligence," Bostrom describes what a super-intelligent network might do once it decided humans were in its way. “At a pre-set time, nanofactories producing nerve gas or target-seeking mosquito-like robots might then burgeon forth simultaneously from every square meter of the globe (although more effective ways of killing could probably be devised).” Progress on true AI is still fairly slow and uncertain. (You can get a vague idea of how far we are from HAL by talking to one of the “chat bots” that have won the annual Loebner Prize for human-simulation; they all sound like they’re on drugs.) But if existential-riskers seem overly concerned with the least likely doomsday scenario, it’s because this scenario has a utopian flip side.

“When we think about general intelligence,” says Luke Muehlhauser, Executive Director at MIRI, “that’s a meta-technology that gives you everything else that you want — including really radical things that are even weird to talk about, like having our consciousness survive for thousands of years. Physics doesn’t outlaw those things, it’s just that we don’t have enough intelligence and haven’t put enough work into the problem … If we can get artificial intelligence right, I think it would be the best thing that ever happened in the universe, basically.”

A surprising number of conversations with experts in human extinction end like this: with great hope. You’d think that contemplating robot extermination would make you gloomy, but it’s just the opposite. As Rees explains, “What science does is makes one aware of the marvelous potential of life ahead. And being aware of that, one is more concerned that it should not be foreclosed by screwing up during this century.” Concern over humanity’s extermination at the hands of nanobots or computers, it turns out, often conceals optimism of the kind you just don’t find in liberal arts majors. It implies a belief in a common human destiny and the transformative power of technology.

“The stakes are very large,” Bostrom told me. “There is this long-term future that could be so enormous. If our descendants colonized the universe, we could have these intergalactic civilizations with planetary-sized minds thinking and feeling things that are beyond our ken, living for billions of years. There’s an enormous amount of value that’s on the line.”

It’s all pretty hypothetical for now. Some of these institutes have not even started to do research yet; they’re still raising funds. The total budget for all of them put together is about $4 million a year; Tallinn complains that humanity spends more on lipstick than it does on making sure our species survives the century. This is selling us short: after all, we have security agencies that monitor biological weapons and nuclear proliferation, not to mention at least some global effort to prevent runaway climate change. We’re not entirely negligent. If humanity makes it safely through the next 50 or so years, we’ll probably have the IPCC to thank, along with a few clear-headed politicians. But if we make it 200 years, or 1,000, or 100,000, some of the credit may have to go to a few very rational people from the early 21st century who thought long and hard about anthropic bias and the stable self-modification problem while the rest of us were worrying about terrorists and trans fats. Those planet-sized minds touring the galaxy, if they ever come to be, will have to include a memorial to the ancient humans who took the long view.

By Aaron Labaree
