31 December 2016

FIVE GIANT LEAPS FOR ROBOTKIND: EXPANDING THE POSSIBLE IN AUTONOMOUS WEAPONS

ANDREW HILL AND GREGG THOMPSON


History teems with brutal ironies. The printed program for the November 29, 1941, Army-Navy football game included a photo of the USS Arizona with the caption, “It is significant that despite the claims of air enthusiasts no battleship has yet been sunk by bombs.” Just eight days before the Pearl Harbor attack, the destruction of several battleships by aircraft seemed impossible.

The biologist Stephen Jay Gould observed, “Impossible is usually defined by our theories, not given by nature.” The dividing line between the possible and the impossible is often contingent on our incomplete understanding of the nature of the world and the provisional assumptions we use to explain and to predict. Rarely do these assumptions align perfectly with reality. In the development of combat capabilities, we may behave as though the boundary that divides the possible from the impossible exists in nature, waiting for us to discover it and push our military power up to its very limits. But there is no such line anywhere but in our heads. The boundary is the product of our ideas about the realm of current possibilities and our limited understanding of uncountable future possibilities.

Standing at the beginning of the robotics revolution in warfare, we too frequently speak of impossibilities. In a recent speech, Secretary of Defense Ashton Carter said:

I’ll repeat yet again, since it keeps coming up, that when it comes to using autonomy in our weapons systems, we will always have a human being in decision-making about the use of force.

That is a clear assertion that full autonomy for lethal systems is not to be, at least according to the current secretary of defense, other senior defense officials, and DoD policy. It is easy to understand this position, given our current boundaries of understanding. First, technology is not yet discriminate enough to justify using lethal autonomous weapons, especially given American preferences for limiting friendly and noncombatant casualties. Second, these systems create a justifiable sense of dread. Nobody wants to hasten the robot apocalypse. However, we make an error when we use current technological limitations, our fears of killer robots, and legal arguments of jus in bello to assert the impossibility of lethal artificial intelligence as an effective military capability. This is a case of our theories constraining our imaginations, which in turn limits the development of potentially useful warfighting technologies.

We must recognize that while current policy does not change the nature of a pending reality, it may cause us to discover it later than our adversaries. U.S. leaders should support development of operationally effective lethal autonomous weapons systems now with the dual objectives of maintaining strategic capability overmatch today and participating in eventual arms control negotiations about these systems from a position of strength.

Asserting that unmanned systems will always have a human in the loop will constrain development of artificially intelligent military systems. Instead, leaders should identify key technological milestones for robotic systems to surpass human-centric military capabilities and then focus research and development on achieving those specific goals. In this essay, we identify five “giant leaps” in capability that portend the development of fully autonomous, lethal weapons. Taken together, they provide developers inside and outside of the Department of Defense with a set of benchmarks that extend the realm of the possible.

Leap 1: The Hostage Rescue Test and Autonomous Discriminate Lethality

This test involves challenging robotic platforms to exceed optimal human performance in speed and discrimination in a series of hostage rescue scenarios. With its combination of high tactical speed, confined and confusing spaces, and sensory limitations, a hostage rescue scenario poses a significant challenge to military units. It is an equally stiff challenge to artificial intelligence, and it features two major thresholds for the development of autonomous systems. First, it requires precise discrimination between friend and foe. Second, the dynamics of the environment and the presence of “friendly” hostages mean that lethal decisions occur too quickly to accommodate human oversight. The robots must be empowered to decide whether to kill without receiving permission in that moment. The standard is not perfection; even the best-trained human teams make mistakes. An effective “leap” threshold is for a robotic team to complete the task faster while making fewer mistakes than the best human teams. Doing so represents two major advances in robotics development: fine target discrimination in a degraded sensory environment and lethal action without human oversight or intervention.
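To make the threshold concrete, the pass criterion could be expressed as a simple comparison of trial results. The sketch below is purely illustrative; the record fields and the scoring rule are assumptions for the sake of the example, not an established benchmark.

```python
# A minimal sketch of the Leap 1 pass criterion. The TrialResult fields and
# the passes_leap_one() rule are illustrative assumptions, not a fielded test.
from dataclasses import dataclass

@dataclass
class TrialResult:
    completion_time_s: float    # time to clear the hostage scenario
    misidentifications: int     # friend/foe discrimination errors
    friendly_casualties: int    # hostages or teammates harmed

def total_mistakes(r: TrialResult) -> int:
    return r.misidentifications + r.friendly_casualties

def passes_leap_one(robot: TrialResult, best_human: TrialResult) -> bool:
    """The robotic team must be both faster and less error-prone than the
    best human team; perfection is not the standard."""
    return (robot.completion_time_s < best_human.completion_time_s
            and total_mistakes(robot) < total_mistakes(best_human))

# Example: robots clear the rooms in 38 seconds with one error; the best
# human team needed 52 seconds and made two errors, so the robots pass.
print(passes_leap_one(TrialResult(38.0, 1, 0), TrialResult(52.0, 1, 1)))  # True
```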

Leap 2: The Paratrooper Test and Creating Order Without Communications

In the early hours before the D-Day amphibious landings, American paratroopers landed behind the defenses of Normandy. Difficult operational conditions left many troops dispersed over large areas, out of contact with each other, and unable to communicate with commanders. Forced to improvise under difficult circumstances, paratroopers formed ad hoc units, quickly organizing to fight and meet their mission objectives.

On a modern battlefield with ubiquitous sensors and electronic signature concealment, military units must be prepared to operate without persistent communications. In such an environment, human beings can still organize and function, and robotic teams must possess the same capability. Yet current Department of Defense policy expressly forbids even the development of autonomous systems that can select and engage targets when communications are degraded or lost. Others have already suggested that this is a mistake, and some senior leaders have acknowledged the limits of the current approach. Effective robotic systems need to be able to organize spontaneously, communicate, and act collectively in pursuing an objective without calling back to their human commanders.

The paratrooper test involves scattering robotic platforms in a communication-deprived environment and challenging them to form operationally effective teams and coordinate their collective behavior to achieve an operational objective.
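A minimal sketch of the first step, spontaneous team formation without any reach-back link, might look like the following. The drop positions, radio range, and minimum team size are illustrative assumptions; a real system would also have to coordinate behavior within each ad hoc team.

```python
# Scattered platforms that can only "hear" peers within a short local radio
# range group themselves into ad hoc teams (connected components), with no
# central commander. Positions, range, and team-size check are assumptions.
from itertools import combinations
from math import dist

RADIO_RANGE_KM = 2.0
MIN_TEAM_SIZE = 3   # smallest grouping assumed able to pursue the objective

def form_ad_hoc_teams(positions: dict[str, tuple[float, float]]) -> list[set[str]]:
    """Group platforms into teams reachable through chains of local links."""
    neighbors = {name: set() for name in positions}
    for a, b in combinations(positions, 2):
        if dist(positions[a], positions[b]) <= RADIO_RANGE_KM:
            neighbors[a].add(b)
            neighbors[b].add(a)
    teams, unassigned = [], set(positions)
    while unassigned:
        frontier = [unassigned.pop()]
        team = set(frontier)
        while frontier:
            for peer in neighbors[frontier.pop()]:
                if peer in unassigned:
                    unassigned.remove(peer)
                    team.add(peer)
                    frontier.append(peer)
        teams.append(team)
    return teams

# A hypothetical drop pattern: three platforms land close together, two land
# together elsewhere, and one lands far from everyone.
drop_pattern = {"r1": (0.0, 0.0), "r2": (1.5, 0.4), "r3": (1.9, 1.8),
                "r4": (9.0, 9.2), "r5": (9.5, 8.8), "r6": (30.0, 2.0)}
teams = form_ad_hoc_teams(drop_pattern)
mission_capable = [t for t in teams if len(t) >= MIN_TEAM_SIZE]
print(teams, mission_capable)
```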

Leap 3: The B.A. Baracus Test and Improvising Materiel Solutions

On the 1980s TV show The A-Team, the character B.A. Baracus was a gifted mechanic responsible for improvising materiel solutions to the team’s problems (usually converting a regular vehicle into an armored car). Silly as it was, B.A.’s mechanical magic captured a reality of conflict: the need to adapt existing equipment to unanticipated operational problems. In war, enemies adapt. These adaptations reveal the shortcomings of existing solutions, which in turn often require materiel adaptation and improvisation. History is full of examples, from the Roman creation of the “corvus” (crow or raven) grappling device to overcome Carthaginian superiority in naval maneuver to the U.S. Army’s “rhino” modification of the Sherman tank to burst through the Normandy hedgerows.

The B.A. Baracus test challenges a robotic team to manipulate physical resources to modify or create equipment to overcome an unanticipated operational problem. This test is crucial for fully autonomous military systems. Nanotechnology and additive manufacturing suggest that such a capability is not as outlandish as it seems. Of course, such a challenge can vary in sophistication. The basic premise is that machines must be able to improvise modifications to themselves or to other machines using materials on hand to be effective as a fighting force. This capability represents a major and necessary advance in autonomous systems.

Leap 4: The Spontaneous Doctrine Test and Finding New Ways to Fight

The competitive conditions of war will require more than just adjustments in materiel or operational objectives. The introduction of mobile and intelligent autonomous machines into the operating environment demands innovation in how human-machine teams organize and fight against similarly equipped adversaries. During the Vietnam War, early American successes in massing infantry using helicopters resulted in adaptation by the North Vietnamese and Viet Cong. As Stephen Rosen observed in Winning the Next War, enemy forces became less willing to engage U.S. units in open combat, preferring traditional guerrilla tactics. These changes prompted further doctrinal innovation by the Americans.

Current approaches to the use of unmanned systems place them inside established doctrine with a corresponding organization by domains. This may not be optimal or appropriate for artificially intelligent systems. Effective robotic systems must be able to experiment rapidly and independently with different ways of fighting.

The spontaneous doctrine test involves deliberately assigning a robotic system an objective for which it is suboptimally organized or equipped, and then allowing it to explore different ways of fighting. We should expect that the unique characteristics of robotic systems will cause them to organize differently around a military problem than humans would. We must challenge autonomous systems to organize dynamically and employ capabilities based on the competitive conditions they face, spontaneously developing ways of fighting that are better suited to the evolving conditions of future combat.
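One way to picture the test is as a search over candidate ways of fighting, scored against the conditions the system actually faces. The tactics list and the simulation stub below are purely illustrative assumptions; they stand in for whatever wargaming or self-play capability a real system would use.

```python
# A minimal sketch of the spontaneous-doctrine idea: a suboptimally equipped
# robotic team trials several candidate ways of fighting in simulation and
# adopts whichever scores best against the conditions it faces.
import random

CANDIDATE_TACTICS = ["mass_on_objective", "dispersed_swarm",
                     "decoy_and_flank", "persistent_harassment"]

def simulated_outcome(tactic: str, conditions: dict) -> float:
    """Stand-in for a wargaming simulation; returns a mission score in [0, 1].
    A real system would evaluate the tactic against the observed enemy."""
    return random.random()

def improvise_doctrine(conditions: dict, trials: int = 20) -> str:
    """Average each candidate tactic's simulated score and keep the best one."""
    scores = {t: sum(simulated_outcome(t, conditions) for _ in range(trials)) / trials
              for t in CANDIDATE_TACTICS}
    return max(scores, key=scores.get)

print(improvise_doctrine({"terrain": "urban", "enemy_air_defense": "dense"}))
```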

Leap 5: The Disciplined Initiative Test and Justified vs. Unjustified Disobedience

Effective autonomous systems must be able to recognize when altering or even contradicting the orders of superiors is justifiable because those orders do not support achieving a higher objective. War is fought amid extreme uncertainty and constant change. Units must preserve the ability to adjust their objectives based on changing conditions. Two different situations especially require such adjustments. The first is a positive instance: when junior commanders appropriately change objectives to achieve greater gains, or what the U.S. military terms “disciplined initiative” in command. This refers to the power of subordinate commanders to alter their objectives or even exceed the stated objectives of senior commanders when circumstances require it. This is a form of justifiable disobedience, and good senior leaders do not object to it. The second is a negative instance, when junior commanders refuse to obey an order because it is illegal, immoral, or excessively risky.

The disciplined initiative test challenges teams of robots to use disciplined initiative in both positive and negative instances, giving them orders that are inappropriate to actual battlefield conditions and allowing them to decide whether to follow orders or devise another approach. The test should be conducted without the ability to communicate with commanders, requiring subordinate systems to adjust their objectives independently based on their new understanding.
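A minimal sketch of the decision rule being tested might look like the following. The assessment fields and the risk threshold are illustrative assumptions, standing in for whatever legal, ethical, and risk models a real system would use.

```python
# Illustrative disciplined-initiative rule: refuse unlawful or excessively
# risky orders (justified disobedience); adapt when conditions have changed
# and a better path to the higher objective exists; otherwise execute.
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    EXECUTE = "execute as ordered"
    ADAPT = "alter or exceed the objective (justified initiative)"
    REFUSE = "refuse the order (justified disobedience)"

@dataclass
class Assessment:
    order_is_lawful: bool           # output of a legal/ethical screen
    expected_risk: float            # predicted friendly/noncombatant cost, 0..1
    order_supports_higher_intent: bool
    better_option_available: bool   # a course of action with greater expected gain

def disciplined_initiative(a: Assessment, risk_ceiling: float = 0.8) -> Decision:
    # Negative instance: illegal, immoral, or excessively risky orders.
    if not a.order_is_lawful or a.expected_risk > risk_ceiling:
        return Decision.REFUSE
    # Positive instance: lawful order, but conditions favor a better approach.
    if not a.order_supports_higher_intent or a.better_option_available:
        return Decision.ADAPT
    return Decision.EXECUTE

print(disciplined_initiative(Assessment(True, 0.2, False, True)))  # Decision.ADAPT
```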

What Next? Sacrifice and Transcendence

The discussion thus far has focused on how to create benchmarks for the development of autonomous weapons with the capability of exceeding human-operated systems. Before concluding, let us ask two final questions for future consideration.

First, how do we think about the development of the desire for self-preservation in a robotic weapons system? In Star Trek II: The Wrath of Khan, Spock sacrifices himself to save the Enterprise, explaining as he dies, “the needs of the many outweigh the needs of the few… or the one.” Human beings (or Vulcans) have a strong sense of self-preservation, but they are also capable of overcoming that instinct in seeking a higher goal, such as preserving the lives of others. A desire to survive and a willingness to sacrifice are both necessary for effective militaries. Without a sense of the value of life, a military will waste itself by taking pointless or avoidable risks. Conversely, without a willingness to sacrifice, a military will not take the risks necessary to achieve worthy objectives.

High-quality robotic systems will not be cheap. For the military to use them effectively, these systems must have a desire for self-preservation. Yet they must also be able to recognize when choosing certain destruction is the right thing to do. Effective autonomous systems must have the ability to choose between self-preservation and “the needs of the many.”

Second, how do we develop the potential of system-wide artificial intelligence to greatest effect? A robot can distribute its cyber “mind” across numerous platforms in the air and in space, on land, and on or under the sea. Integration can be intuitive and seamless, with the artificial intelligence perceiving and acting simultaneously across all of these areas. Military domains exist only because of the cognitive limitations humans face in understanding and employing different military instruments. An advanced artificial intelligence does not have the constraints that require such a division of labor. For robots, domains need not exist as distinct “joint” functions in the way they do for humans. Artificial intelligence can transcend the physical domains that organize and constrain human combat development and military operations.

The future of autonomous weapons is intimidating. We cannot allow our trepidation about that future to prevent us from shaping and controlling it. If we stand aside, others will take our place, and they may create the nightmarish world that we fear.

Dr. Andrew Hill is an Associate Professor at the U.S. Army War College, and Director of the Carlisle Scholars Program. He is a regular contributor to War on the Rocks.

Col. Gregg Thompson is an instructor in the Department of Command, Leadership, and Management at the Army War College. Previously, he served as the Director for Capability Development and Integration at the Maneuver Support Center of Excellence, Fort Leonard Wood, Missouri.
