Trump administration accelerates military study of artificial intelligence

But experts at the RAND Corporation warn that potential dangers loom

Published April 28, 2018 7:29PM (EDT)

(Reuters/Tim Wimborne)

This article originally appeared on publicintegrity.org.

The Trump administration is keenly interested in using artificial intelligence to help the military perform some of its key tasks more effectively and cheaply, the Defense Department’s second most senior official told defense reporters in Washington on April 24.

Deputy Secretary of Defense Patrick Shanahan, a former Boeing aircraft executive, said artificial intelligence or AI — the use of computer systems to perform tasks that normally require human intelligence — could aid the department, for example, in making better use of the voluminous intelligence data it collects.

He also said AI could enhance military logistics, the task of supplying the right parts and gear to soldiers and maintenance crews at the right time. And it could facilitate wiser decision-making about providing health care for service members, producing future cost savings.

Already, the Pentagon is preparing to create a Center of Excellence — possibly within the next six months — that would pull together multiple existing military programs related to AI applications and bring added coherence and impetus to the work, he and other senior defense officials have said.

Shanahan’s remarks to the Defense Writers Group came on the same day, however, that the RAND Corporation — a longstanding Pentagon contractor — issued a public warning that the application of AI to military tasks may have worrisome downsides. Among them: the possibility that AI could heighten the risk of nuclear war by subtly undermining one of the key pillars of nuclear deterrence.

The report, entitled “How Might Artificial Intelligence Affect the Risk of Nuclear War?”, put questions about the benefits and risks of using AI to three panels of nuclear security professionals and AI researchers, who met in May and June of 2017.

They looked at what might happen by the year 2040, and warned that by then, AI could allow a superpower — the United States, for example — to process sensor data so quickly and creatively that it could locate with high precision an enemy’s mobile intercontinental ballistic missiles, meaning those moved around on trucks or carried by submarines.

This capability would undermine nuclear deterrence, the RAND report noted, because the elusiveness of mobile missiles makes it difficult for an attacker to destroy them all in a first strike; according to deterrence theory, it is the knowledge that nuclear retaliation remains possible that makes countries wary of launching such a strike.

But if AI enables a nation to destroy those previously elusive missiles with fast-flying conventional weapons, the country feeling this threat might become nervous enough to use those missiles early in a crisis, leading to what specialists call “inadvertent escalation.”

“Such escalation could happen because the adversary felt the need to use its weapons before being disarmed, in retaliation for an unsuccessful disarming strike, or simply because the crisis triggered accidental use,” the report states. While this scenario is most likely to play out in strategic competition between Russia, China, and the United States, it could also affect competition between regional nuclear rivals, such as India and Pakistan.
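To make that deterrence arithmetic concrete, the following is a minimal, hypothetical sketch in Python (not drawn from the RAND report; the model and every number in it are illustrative assumptions) of how better AI-assisted detection of mobile launchers shrinks the retaliatory force an attacker would expect to survive a first strike:

```python
# Toy model of a disarming first strike (illustrative assumptions only).
# Deterrence rests on an attacker expecting some launchers to survive
# and retaliate; raising the probability of locating each mobile
# launcher cuts the expected survivors, and with them the deterrent.

def expected_survivors(n_launchers: int, p_locate: float, p_kill_if_located: float) -> float:
    """Expected number of launchers surviving a first strike."""
    p_destroyed = p_locate * p_kill_if_located  # a launcher must be found, then hit
    return n_launchers * (1.0 - p_destroyed)

# Sweep detection probability to mimic improving AI-driven sensor fusion.
for p_locate in (0.2, 0.5, 0.9):
    survivors = expected_survivors(n_launchers=100, p_locate=p_locate, p_kill_if_located=0.9)
    print(f"P(locate) = {p_locate:.1f} -> expected surviving launchers: {survivors:.0f}")
```

In this toy setup, pushing detection from 20 percent to 90 percent drops the expected surviving force from 82 launchers to 19; it is that collapse in expected retaliation, not any single weapon, that the report warns could tempt a nervous adversary to launch early.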

The report adds that AI might bring some good to the battlefield as well. As AI improves, RAND researchers suggest, it “might be able to play aspects or stages of military wargames or exercises at superhuman levels.” Nations in an adversarial standoff thus would not have to rely on human judgment alone to assess whether nuclear war can be avoided in a crisis.

The report sorts panelists’ reactions to such scenarios into two camps: the “complacents” and the “alarmists.”

Complacents, as the report describes them, foresee an AI winter — an extended period during which AI innovations fail to make significant progress. Alarmists, on the other hand, view an AI winter as unlikely and see the kind of superintelligence embodied by Arnold Schwarzenegger’s Terminator character as inevitable.

“At present, we cannot predict which — if any — of these scenarios will come to pass,” the report says, “but we need to begin considering the potential impact of AI on nuclear security before these challenges become acute.”

In other words: Don’t freak out just yet. There’s still time to think through how an AI disaster might be avoided.


By Matt Stroud

By R. Jeffrey Smith

