Killer robots are coming next: The next military-industrial complex will involve real-life Terminators

War's scary future will be led by machines, but require a moral framework. Do we have the foresight or standing?

Published June 28, 2015 7:30PM (EDT)

Arnold Schwarzenegger in "Terminator Genisys" (Paramount Pictures)

Excerpted from "A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control"

The idea to propose a presidential order limiting the development of lethal autonomous robots (killer robots) popped into my mind as if someone had placed it there. The date was February 18, 2012, and I was waiting for a connecting flight home in the U.S. Airways departure terminal at Reagan Airport. My gaze traveled along the panoramic view across the tarmac where the Capitol Dome and the Washington Monument rose high above the tree line along the Potomac River. And suddenly, there it was, an idea I felt compelled to act upon. In the following days, I wrote up and began circulating a proposal for a presidential order, declaring that the U.S. considers autonomous weapons capable of initiating lethal force to be in violation of the Laws of Armed Conflict (LOAC).

For decades, Hollywood has supplied us with plenty of reasons to be frightened about the roboticization of warfare. But now that drones and autonomous antimissile defense systems have been deployed, and many other forms of robotic weaponry are under development, the inflection point where it must be decided whether to go down this road has arrived.

For many military planners, the answer is straightforward. Unmanned drones were particularly successful for the U.S. in killing leaders of al-Qaeda hidden in remote locations of Afghanistan and Pakistan. Some analysts believe unmanned air vehicles (UAVs) were the only game in town, the only tool the U.S. and its allies had to successfully combat guerrilla fighters. Furthermore, drones killed a good number of al-Qaeda leaders without jeopardizing the lives of soldiers. Another key advantage: reducing the loss of civilian lives through the greater precision that can be achieved with UAVs in comparison to more traditional missile attacks. The successful use of drones in warfare has been accompanied by the refrain that we must build more advanced robot weapons before “they” do.

Roboticizing aspects of warfare gained steam in the U.S. during the administrations of George W. Bush and Barack Obama. As country after country follows the U.S. military's lead and builds its own force of UAVs, it is clear that robot fighters are here to stay. This represents a shift in the way future wars will be fought, comparable to the introduction of the crossbow, the Gatling gun, aircraft, and nuclear weapons.

What remains undecided is whether robotic war machines will become fully autonomous. Will they pick their own targets and pull the trigger without the approval of a human? Will there be an arms race in robotic weaponry or will limits be set on the kinds of weapons that can be deployed? If limits are not set, eventually the natural escalation of robotic killing machines could easily progress to a point where robust human control over the conduct of warfare is lost.

The executive order I envisaged and proposed could be a first step in establishing an international humanitarian principle that “machines should not be making decisions about killing humans.” A vision of the president signing the executive order accompanied the idea, as if from its inception this initiative was a done deal. Like many dreamers I was wooed to action by the fantasy that this project would be realized quickly and effortlessly. Life is seldom that easy. To this day, I have no idea whether my proposal or any other campaign would lead to a ban of lethal autonomous weaponry. Nevertheless, from that first moment it was clear to me that an opportunity to pursue a ban presently exists but will disappear within a few years. The debate focuses upon whether the development of autonomous killing machines for warfare should be considered acceptable. But in a larger sense, we have begun a process of deciding whether people will retain responsibility for the actions taken by machines.

The first formal call for a ban of lethal autonomous robots had been formulated fifteen months earlier (October 2010) at a workshop in Berlin, Germany, organized by the International Committee for Robot Arms Control (ICRAC). At that time, ICRAC was nothing more than a committee of four scholars (Jürgen Altmann, Peter Asaro, Noel Sharkey, and Rob Sparrow). The gathering they organized included arms control specialists, experts in international humanitarian law (IHL), roboticists, leaders of successful campaigns to stop the development of nasty weapons such as cluster bombs, and most of the small community of scholars who had written about lethal autonomous robots. Colin Allen and I were invited to participate for the work we had done in co-authoring the book Moral Machines: Teaching Robots Right From Wrong.

The word autonomous in reference to robots denotes systems capable of initiating actions with little or no ongoing human involvement. The drones used successfully by the U.S. in Afghanistan and Pakistan are unmanned but remotely controlled by personnel often thousands of miles away. They are not what I and others are worried about. Officers at the remote location decide when a legitimate target is in sight, and then authorize missiles to be dispatched. A human is "in the loop" making decisions about whom to kill or what to destroy. Increasingly, however, more and more functions are being turned over to computerized systems. For example, in 2013, the Northrop Grumman X-47B, a prototype subsonic aircraft with two bomb compartments and a 62-foot wingspan, autonomously took off from and landed on an aircraft carrier. The proposed ban on autonomous lethal robots is focused upon ensuring that in the future, selecting a target and pulling the "trigger" is always a decision made by a human and never delegated to a machine. There must always be a human in the loop.

Today's computers do not have the smarts to make discriminating decisions such as whom to kill or when to fire a shot or a missile. Thus, a ban is directed at future systems that have not yet been deployed, and in nearly all cases, have not yet been built. There is still time to make a course correction. Nevertheless, there already exist dumb autonomous or semi-autonomous weapons that can kill. For example, a landmine is autonomous and will go off without a human making a decision to harm the specific individual who trips the device. Unfortunately, all too often it is children who trip landmines. In addition, defensive weaponry, including antimissile systems such as the U.S. Navy's Phalanx or Israel's Iron Dome, can autonomously intercept incoming missiles well before military personnel would have time to make a decision. A ban would probably not include defensive weaponry, although often the difference between declaring a weapon system defensive or offensive is merely a matter of which direction the weapon points. Nor would a ban affect autonomous weapons viewed as the last line of defense in an area of known hostility. The Samsung Techwin SGR-A1, a stationary robot capable of gunning down anything that moves in the Demilitarized Zone (DMZ) separating North and South Korea, has been deployed since 2010. These stationary robots are unlikely to have even modest success should North Korea decide to send a million troops across the DMZ to invade the South.

The proposal for a presidential order that I wrote up and circulated received only a modest degree of attention and support. However, I'm certainly not alone in calling for a ban. Soon after the November 2012 presidential election, Human Rights Watch (HRW) and the Harvard Law School International Human Rights Clinic entered the campaign with a high-profile report that called for a ban on lethal autonomous robots (LARs). Three months later, HRW and a coalition of other nongovernmental organizations (NGOs) launched the international Campaign to Stop Killer Robots. That campaign is directed at activating worldwide support for an arms control agreement covering robotic weaponry. In addition, a growing community of international experts advocates the need for robot arms control. In a 2013 report, Christof Heyns, U.N. Special Rapporteur on extrajudicial, summary or arbitrary executions, called for a moratorium on the development of LARs as a first step toward considering an international ban.

These efforts have had significant success in catalyzing governments around the world to give a ban serious attention. In May 2014, the U.N. Convention on Certain Conventional Weapons (CCW) convened a meeting in Geneva to discuss the dangers autonomous weapons pose. One hundred seventeen nations are party to the CCW, which restricts the use of specific weapons deemed to cause unjustifiable harm to combatants or civilians. On November 14, 2014, the CCW voted to continue deliberations over LARs, an important first step in acknowledging the importance of the issue.

Opponents of robotic weaponry contend that its use could lower the barriers to starting new wars. The potential loss of one's own troops has been one of the few major deterrents to starting wars. Autonomous weaponry contributes to the illusion that wars can be started and quickly won with minimal costs. Once a war begins, however, not only the lives of soldiers, but also those of civilians will be lost. Furthermore, a decision by an autonomous weapon about whom or when to kill could accidentally initiate hostilities. Such weapons could also be dangerous from an operational perspective if, for example, they escalated an ongoing conflict or used force indiscriminately or disproportionately. For a military commander, the possibility of autonomous systems acting in a manner that escalates hostilities represents a loss of robust command and control.

In addition to saving the lives of one's soldiers, two other ostensibly strong arguments are put forward as objections to banning LARs. The first considers LARs to be a less lethal option than alternative weapons systems. Presuming LARs are more accurate than other available weapons systems, they will cause less loss of civilian life (less collateral damage). Shortsighted contentions such as this do not fully factor in the future dangers once many countries have robotic armies. The long-term consequences of roboticizing aspects of warfare could far outweigh the short-term benefits.

The second argument proposes that future machines will have the capacity for discrimination and will be more moral in their choices and actions than human soldiers. Ronald Arkin, a roboticist at Georgia Tech, takes this position. Arkin has been working toward the development of means for programming a robot soldier to obey the internationally established Laws of Armed Conflict (LOAC). He contends that robot soldiers will be better at following the LOAC because "the bar is so low." Here Arkin refers to research showing, for example, that human soldiers will not squeal on their buddies even when atrocities have been committed. Nevertheless, the prospect of developing robot soldiers any time soon with the ability to make an appropriate judgment in a complex situation is low. For example, a robot would not be good at distinguishing a combatant from a non-combatant, a task that humans also find difficult. Humans, however, bring powers of discrimination to bear in meeting the challenge, capabilities that will be difficult, if not impossible, for robots to emulate. If and when robots become ethical actors that can be held responsible for their actions, we can then begin debating whether they are no longer machines and are deserving of some form of personhood. But warfare is not the place to test speculative possibilities.

Widespread opposition to killer robots already exists, according to a 2013 poll of a random sampling of a thousand Americans by Charli Carpenter, a professor of political science at the University of Massachusetts. Overall, 55 percent of respondents opposed the development of autonomous weapons, with 39 percent strongly opposed. In other words, there exists substantial national and international public support for a ban. But without a continual campaign actively championed by a large segment of the public, policy makers will dismiss support for a ban as the product of fears generated by science fiction. Interestingly, 70 percent of active-duty military respondents rejected fully autonomous weapons. Nevertheless, military planners have retained the option to build autonomous killing machines.

Traditional forms of arms control that depend upon inspection protocols provide a poor model for limiting the use of robotic weapons. The difference between an autonomous system and one requiring the actions of a human to proceed may be little more than a switch or a few lines of software code. Such minor modifications could be easily hidden from an inspector, or added in the days following the inspector’s departure.

Furthermore, arms control agreements can take forever to negotiate, and there's no reason to think that negotiations to formulate procedures for the verification and inspection of robotic weaponry will be any different. Continual innovations in lethal autonomous weaponry would also require periodic revision of whatever arms agreements had been negotiated.

Ban proponents are also up against powerful forces in the military-industrial complex who want sophisticated robotic technology funded by defense budgets. Weapons production is a lucrative business, and research funded by defense budgets can often be sold to our allies or spun off to develop non-military technologies. During the years before a ban is enacted, there will be an alignment of countries and corporations that have a vested interest in continuing the development of robotic weaponry and in defeating any attempts to limit their use. That is why an inflection point exists now, before autonomous weapons become a core weapons system around which major powers formulate their defense strategy. In the U.S., plans to reduce active personnel and increase the deployment of military technology were announced in 2014. Half of the bombers on aircraft carriers are slated to be replaced by a version of the unmanned X-47. Future models will fly in swarms and even make autonomous decisions about bombing targets. The inflection point for limiting lethal autonomous weapons that presently exists could easily disappear within a matter of a few years. How long the window stays open depends upon whether the campaign to ban killer robots gains enough clout to affect the willingness of governments to invest in developing the required technologies.

An international ban on LARs is needed. But given the difficulty in putting such an agreement in place, the emphasis should initially be upon placing a moral onus on the use of machines that make life-and-death decisions. A long-standing concept in just war theory and international humanitarian law designates certain activities as evil in themselves, what Roman philosophers called mala in se. Rape and the use of biological weapons are examples of activities considered mala in se. Machines making life-and-death decisions should also be classified as mala in se.

Machines lack discrimination, empathy, and the capacity to make the proportional judgments necessary for weighing civilian casualties against achieving military objectives. Killer robots are mala in se, not merely because they are machines, but also because their actions are unpredictable, they cannot be fully controlled, and responsibility for the actions of autonomous machines is difficult, if not impossible, to attribute. Delegating life-and-death decisions to machines is immoral because machines cannot be held responsible for their actions.

A RED LINE

The Terminator is clearly science fiction, but it speaks to a powerful intuition that the robotization of warfare is a slippery slope, the end point of which can neither be predicted nor fully controlled. Machines capable of initiating lethal force and the advent of a technological singularity are future-oriented themes, but they function as placeholders for more immediate concerns. The computer industry has begun to develop systems that have a degree of autonomy and are capable of machine learning. Engineers do not know whether they can fully control the choices and actions of autonomous robots.

The long-term consequences of roboticizing warfare must neither be underestimated nor treated cavalierly. But given that arms manufacturers and a few countries are focused on the valuable short-term advantages of developing killer robots, the momentum behind the robot arms race is likely to quickly spin beyond control. Setting parameters now on the activities acceptable for a lethal robot to perform offers one of the few tools available to restrain a robot arms race that mirrors that of the Cold War. One difference, however, is that, unlike nuclear weapons, robotic delivery systems will be relatively easy to build.

In the same November 2012 week in which Human Rights Watch called for a ban on killer robots, the U.S. Department of Defense (DoD) published a directive titled "Autonomy in Weapon Systems." The timing of the two documents' release could have been coincidental. Nevertheless, the directive read as an effort to quell any public concern about the dangers posed by semiautonomous and autonomous weapons systems. Unlike most directives from the DoD, this one was not followed up by detailed protocols and guidelines as to how the testing and safety regimes noted in the document should be carried out. The DoD wants to expand the use of self-directed weapons, and in the directive it explicitly asks the public not to worry about autonomous robotic weaponry, stating that the DoD will put in place adequate oversight on its own. Military planners do not want their options limited by civilians who are fearful of speculative possibilities.

Neither military leaders nor anyone else wants warfare to expand beyond the bounds of human control. The directive repeats eight times that the DoD is concerned with minimizing "failures that could lead to unintended engagements or loss of control of the system." Unfortunately, this promise overlooks two core problems. First, the DoD has a specific mission and cannot be expected to handle oversight of robot autonomy on its own. During the war against al-Qaeda in Afghanistan and Pakistan, military and political necessities caused Secretary of Defense Robert Gates to authorize the deployment of second-generation unmanned aircraft before they had been fully tested. In fulfilling its primary mission, the DoD is prepared to compromise other considerations. Second, even if the American public and U.S. allies trust that the DoD will establish robust command and control in deploying autonomous weaponry, there is absolutely no reason to assume that other countries and non-state actors will do the same. Other countries, including North Korea, Iraq, or whoever happens to be the rogue government du jour, will almost certainly adopt robotic weapons and use them in ways that are entirely beyond our control.

No means exist to ensure that other countries, friend or foe, will institute quality engineering standards and testing procedures before they use autonomous weapons. Some country is likely to deploy crude autonomous drones or ground-based robots capable of initiating lethal force, and that will justify efforts by the U.S., NATO, and other powers to establish superiority in this class of weaponry—yet another escalation that will quicken the pace of developments leading toward sophisticated autonomous weapons.

The only viable route to slow and hopefully arrest a seemingly inexorable march toward future wars that pit one country's autonomous weapons against another's is a principle or international treaty that puts the onus on any party that deploys such weapons. Instead of placing faith in the decisions made by a few military planners about the feasibility of autonomous weapons, we need an open debate within the international community as to whether prohibitions on autonomous offensive weapons are implicit under existing international humanitarian law. A prohibition on machines making life-and-death decisions must either be made explicit under existing law or be established and codified in a new international treaty.

Short of an international ban, a higher-order principle establishing that machines should not be making decisions that are harmful to humans might suffice. Such a principle would set parameters on what is and what is not acceptable. Once that red line is drawn, diplomats and military planners can go on to the more exacting discussion as to the situations in which robotic weapons are indeed an extension of human will and intentions, and those instances when their actions are beyond direct human control. A higher-order principle is something less than an absolute ban on killer robots, but it will set limits on what can be deployed.

There are no guarantees that such a principle will always be respected. The recent use of chemical weapons by Syrian President Bashar al-Assad on his own people to quell a popular revolution provides ample demonstration that any moral prohibition can be ignored. Furthermore, robot delivery systems will be available to non-state actors who may control just one or two nuclear weapons, and feel they have little to lose. Each generation will need to work diligently to keep in place the humanitarian restraints on the way in which innovative weapons are used and future wars are fought.

My proposal for an executive order from the U.S. president was largely ignored. And yet it may come to the fore again, particularly if activity on banning lethal autonomous weapons makes headway in international forums. U.S. legislators have shown little willingness to endorse international treaties. A domestic initiative could still be necessary for the U.S. to join a worldwide ban. However, before any action is taken, a ban will need to become a major issue upon which presidents or presidential candidates are forced to take a stand.

Under those circumstances, President Barack Obama or his successor could sign an executive order declaring that a deliberate attack with lethal and nonlethal force by fully autonomous weaponry violates the Laws of Armed Conflict. This executive order would establish that the United States holds such a principle to be already implicit in existing international law. A responsible human actor must always be in the loop for any offensive strike that harms a human. An executive order establishing limits on autonomous weapons would reinforce the contention that the United States places humanitarian concerns as a priority in fulfilling its defense responsibilities. NATO would soon follow suit, heightening the prospect of an international agreement that all nations will consider computers and robots to be machines that should never make life-and-death decisions.

Drawing a red line limiting the kinds of decisions that computers can make during combat reinforces the principle that humans are responsible for going to war and for harming each other. Responsibility for the death of humans should never be shrugged off or dismissed as the result of error or of machines making algorithmic decisions about life and death. The broader meaning of restraining life-or-death decisions made by military computers lies in a commitment that responsibility for the actions of all machines, intelligent or dumb, resides with us and with our representatives.

Excerpted from "A Dangerous Master: How to Keep Technology From Slipping Beyond Our Control" by Wendell Wallach. Published by Basic Books. Copyright 2015 by Wendell Wallach. Reprinted with permission of the publisher. All rights reserved.


By Wendell Wallach
