Is AI really a threat on par with nuclear war? Some experts aren't convinced

Scientists, experts call "BS" as AI industry faces backlash over "fear mongering"

By Rae Hodge

Staff Reporter

Published May 31, 2023 12:00PM (EDT)

Samuel Altman, CEO of OpenAI, testifies before the Senate Judiciary Subcommittee on Privacy, Technology, and the Law May 16, 2023 in Washington, DC. (Win McNamee/Getty Images)

The artificial intelligence world faced a swarm of stinging backlash Tuesday morning, after more than 350 tech executives and researchers released a public statement declaring that the risks of runaway AI could be on par with those of "nuclear war" and human "extinction." Among the signatories were some who are actively pursuing the profitable development of the very products their statement warned about — including OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis. 

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," the statement from the non-profit Center for AI Safety said. 

But not everyone was shaking in their boots, especially not those who have been charting AI tech moguls' escalating use of splashy language — and those moguls' hopes for an elite global AI governance board. 

TechCrunch's Natasha Lomas, whose coverage has been steeped in AI, immediately unravelled the latest panic-push efforts with a detailed rundown of the current table stakes for companies positioning themselves at the front of the fast-emerging AI industry. 

"Certainly it speaks volumes about existing AI power structures that tech execs at AI giants including OpenAI, DeepMind, Stability AI and Anthropic are so happy to band and chatter together when it comes to publicly amplifying talk of existential AI risk. And how much more reticent to get together to discuss harms their tools can be seen causing right now," Lomas wrote.

"Instead of the statement calling for a development pause, which would risk freezing OpenAI's lead in the generative AI field, it lobbies policymakers to focus on risk mitigation — doing so while OpenAI is simultaneously crowdfunding efforts to shape 'democratic processes for steering AI,'" Lomas added.

Other field experts promptly shot back at the tech execs' statement. Retired nuclear scientists, AI ethicists, tenured tech writers and human extinction scholars all called the industrialists on the carpet for their use of inflammatory language. 

"This is a 'look at me' by software people. The claim that AI poses a risk of extinction of the human race is BS," retired nuclear scientist Cheryl Rofer said in a Tuesday tweet. "We have real, existing risks: global warming and nuclear weapons." 


Émile Torres, a historian of human extinction (and Salon contributor), was quick to point out the hypocrisy of tech giants' role in manufacturing an already unethical AI development environment. 

"You'll never see these people signing a document like this about prioritizing the mitigation of harms, some profound, already being caused by AI companies like OpenAI," Torres said in a series of tweets. "No, those harms are just 'mere ripples' and 'small missteps for mankind' in the grand cosmic scheme of things."

"A few weeks ago [Altman] was pontificating about leaving the EU market due to proposed training data transparency requirements. Do not take these statements seriously," said tech writer Robert Bateman in a tweet. 

The use of scary language and fear as a marketing tool has a long history in tech. And, as the LA Times' Brian Merchant pointed out in an April column, OpenAI stands to profit significantly from a fear-driven gold rush of enterprise contracts. 

"[OpenAI is] almost certainly betting its longer-term future on more partnerships like the one with Microsoft and enterprise deals serving large companies," Merchant wrote. "That means convincing more corporations that if they want to survive the coming AI-led mass upheaval, they'd better climb aboard."

Government tech contracts can be just as lucrative as enterprise contracts for a burgeoning company at the head of a digital revolution — as Microsoft would know, with its financial foundations rooted in mass public-sector deployment. It's too early to speculate on whether government contracts may be a target market for a company like OpenAI, as it is for Clearview AI, the controversial facial recognition software often used by law enforcement agencies to monitor protests. But with Microsoft's latest announcement that some OpenAI features will be integrated into certain upcoming Windows systems — and the recent successes of Altman's Congressional charm offensive — lawmakers have reason to pause when considering the gravity of the tech executives' wording here.

Fear, after all, is a powerful sales tool. 


By Rae Hodge

Rae Hodge is a science reporter for Salon. Her data-driven, investigative coverage spans more than a decade, including prior roles with CNET, the AP, NPR, the BBC and others. She can be found on Mastodon at 

