Our science-fiction apocalypse: Meet the scientists trying to predict the end of the world
How exotic and unlikely-sounding disasters could kill every last human being
It’s been a summer of bad news, what with increasingly bleak geopolitical turmoil, conflict in American streets, and carbon spewing into the atmosphere faster than ever. But I’ve been talking to experts in the field of catastrophic risk, and I’m happy to report that, despite all seeming evidence to the contrary, the effort to prevent human extinction is making progress. When it comes to the survival of the human race — the long-term, species-level survival — global war and global warming may be relatively small potatoes.
“Global warming is very unlikely to produce an existential catastrophe,” Nick Bostrom, head of the Future of Humanity Institute at Oxford, told me when I met him in Boston last month. “The Stern report says it could hack away 20 percent of global GDP. That’s, like, how rich the world was 10 or 15 years ago. It would be lost in the noise if you look in the long term.”
Twenty percent sounds pretty bad, especially when you consider that by the end of the century there will be a few billion more people to share it with. But, Bostrom believes, even the misery caused by this kind of decline pales in comparison to what could be inflicted by high-tech nightmares: a bioengineered pandemic, nanotechnology gone haywire, even super-intelligent AI run amok. These exotic and unlikely-sounding disasters could kill every last human being very quickly, and it is these possibilities that are finally getting some attention. In the last few years, a number of institutes have sprung up to do serious research on the risks of emerging technology, some of them attached to the world’s most prestigious universities and stocked with famous experts. In addition to FHI, there is the Centre for the Study of Existential Risk at Cambridge and the Future of Life Institute at MIT, along with the Lifeboat Foundation, the Foresight Institute and several others. After years of neglect, the first serious efforts to prevent techno-apocalypse may be underway.
The field has benefitted from a well-informed patron, the Estonian entrepreneur and computer programmer Jaan Tallinn, co-founder of Skype. When I Skyped with Tallinn, he told me that his concern with the future of humanity began in 2009, when a lawsuit between Skype and eBay left him temporarily sidelined. Finding himself with millions of dollars in the bank and no obligations, he spent his time reading through the website Less Wrong, “a community blog devoted to refining the art of human rationality.”
Here he found the writings of Eliezer Yudkowsky, a self-taught computer theorist who argued that the emergence of artificial intelligence might turn out very badly for humans. Superintelligent AI might prove to be smart not in the way Einstein was smart compared to the average person, but in the way the average person is smart compared to a beetle or worm. We humans could quickly find ourselves at the AI’s mercy as it transformed our environment to fit its own goals and not ours.
“When I looked at those arguments,” says Tallinn, “I didn’t find any flaws. So I contacted him and we started a discussion.” Tallinn now partially supports Yudkowsky’s work at the Machine Intelligence Research Institute, along with at least seven other institutes in the U.S. and U.K. that study existential risk.
He and Yudkowsky were not the first to consider these problems. Scientists have been concerned with the apocalyptic consequences of advanced technology at least since the Manhattan Project, when Robert Oppenheimer ordered study LA-602 — “Ignition of the Atmosphere with Nuclear Bombs” — to calculate whether a nuclear detonation would cause an uncontrolled chain of nuclear reactions and burn up the atmosphere. (The researchers’ conclusion: it wouldn’t. Thankfully, they were right.) Oppenheimer, along with Einstein and other physicists, went on to oppose development of the hydrogen bomb, to no avail.
More recently, cosmologist Martin Rees, in his 2004 book, “Our Final Hour,” argued that humanity’s chances of surviving this century are no better than 50/50. He’s now at the Centre for the Study of Existential Risk at Cambridge, which was started after a dinner with Tallinn and the philosopher Huw Price. The Future of Life Institute at MIT, the newest member of the group, took shape in conversations at astrophysicist Max Tegmark’s home in Cambridge, a city where he found no shortage of like-minded colleagues, including Nobel Prize winner Frank Wilczek and geneticist George Church.
The work and the priorities of these institutes are not all the same. Rees is less sanguine than Bostrom about global warming, for example, and CSER includes preventing out-of-control climate change as one of its goals. MIRI is narrowly focused on theoretical AI research, while the more philosophical FHI pays more attention to the meta-issue of how to think logically about long-term problems (and pays the bills by helping insurance companies make better predictions). What they agree on is that scientists, policymakers and the public should take the prospect of accidental self-destruction more seriously.
Which scenario should we worry about first? The risks are hard to quantify, but the immediacy of each threat depends largely on how mature the underlying technology already is. That puts bioengineering at the top of the list, since the technology to build deadly viruses in a lab already exists. “My worst nightmare,” said Rees, “is the lone weirdo ecofanatic proficient in biotech, who thinks it would be better for the world if there were fewer humans to despoil the biosphere.” University of Wisconsin professor Yoshihiro Kawaoka recently showed just how real this possibility is when he engineered a strain of the deadly 2009 “swine flu” virus that could evade the human immune system. So far he has not described exactly how he did it, but his paper is apparently complete and ready for publication.
