
ARTIFICIAL INTELLIGENCE: WAR’S NEW GRAMMAR



Adam Elkus
March 6, 2014
Artificial intelligence (AI) is a hot topic in the defense community. Since the publication of P.W. Singer’s Wired for War, analysts have debated whether we are truly moving toward what Manuel De Landa dubbed “war in the age of intelligent machines” in his 1992 book of the same name. In particular, the morality and legality of robots have attracted a lot of attention. However, much of this debate engages with a pop culture-influenced vision of what AI could be, not with how it is actually used today. In doing so, the discussion misses the more subtle, but equally groundbreaking, ways that AI is transforming the defense landscape. While warfare is being revolutionized by robots, larger ethical and strategic questions loom about artificial intelligence as a whole.

AI research can be broadly divided into two traditions: Strong and Weak. Strong AI searches for ways to replicate human-like cognition in machines, and is epitomized by the symbol-based methods John Haugeland dubbed “Good Old Fashioned AI” during the 1980s. Weak AI, on the other hand, simply aims to make computers do tasks that humans can do. As pioneering computer scientist Edsger Dijkstra once said, “[t]he question of whether Machines Can Think… is about as relevant as the question of whether Submarines Can Swim.”

Just as very few Navy officials worry about whether a Los Angeles-class submarine can move through the water like a fish, very few AI researchers worry about whether an artificial intelligence algorithm is cognitively realistic. This will surprise fans of pop culture who, from Fritz Lang’s silent film Metropolis to the sci-fi TV series Battlestar Galactica, obsess over the philosophical dilemmas posed by machines with human qualities. Most AI researchers (and the tech companies that pay them) couldn’t care less. If you were hoping that science would gift you with your very own Number Six, you’re in for disappointment. Why? Consider the problem of spam classification. How do we make computers better at detecting spam? We could spend a lot of time and money trying to figure out how the human brain detects, classifies, and learns from experience… or we could use a simple machine learning algorithm that we know doesn’t work the same way our brains do.
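
A minimal sketch of that second approach, assuming the scikit-learn library and a toy set of hand-labeled messages (all examples here are invented for illustration):

    # Train a naive Bayes spam filter on labeled examples: no model of human
    # cognition required, just word counts and conditional probabilities.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    # Toy training data (invented for illustration).
    messages = [
        "win a free prize now", "cheap meds online",       # spam
        "meeting moved to 3pm", "draft slides attached",   # not spam
    ]
    labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(messages)          # bag-of-words features
    classifier = MultinomialNB().fit(X, labels)

    # Classify a message the model has never seen.
    print(classifier.predict(vectorizer.transform(["claim your free prize"])))

The filter classifies by counting words and estimating probabilities, not by modeling how a human reads email, which is exactly the point.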

To be sure, computer scientists can’t completely avoid using humans as models. Google has just invested a substantial amount of money in deep learning, which takes inspiration in large part from neuroscience. But the goal, in general, is to develop algorithms that do needed jobs (and make money), not to make replicas of human minds. As Facebook’s Yann LeCun writes, the goal of mainstream AI research is to allow humans to focus on the things that make us distinctively human and offload the rest to computers. And LeCun isn’t alone in his dream of using the machine to enhance, not replicate, homo sapiens. The dream of machines enhancing human potential has been with us ever since the first automated tools sprang to life.

It is little known, for example, that the founder of Communism himself would likely have been a huge fan of artificial intelligence and robotics. Karl Marx dreamed of super-productive, super-efficient machines that would take care of society’s industrial and economic needs and thus make his classless society more than just a utopian fantasy. The invention of super-productive automatons would transform the industrial world into a place that required only one kind of worker – a technician who could design and operate those automatons. A classless society in which everyone owned the means of robotic production could free the proletariat from the factories and still maintain the standard of living capitalism had produced. Hence, for any old-school Marxist, the writings of Alan Turing ought to be just as important as those of Marx and Engels.

Still, that doesn’t answer the question that haunts the Number Six-craving BSG fan. OK, so most computer scientists aren’t hard at work trying to make the best sexy killer robot they can. But is it (hypothetically) possible for Turing’s ideas to give us a robotic Tricia Helfer? Never say never, especially where sexy killer robots are concerned. But advances in Strong AI that would make such a contraption possible would require solving some of the deepest questions in cognitive science and philosophy, as well as troubleshooting wee little technical problems like whole brain emulation. Mainstream research on Strong AI continues, but academic computer scientists often avoid the field due to the barriers to progress, as well as the high proportion of quacks and Hollywood-influenced people who dwell within it.

When we take the full artificial intelligence picture into account, it becomes obvious that our focus on robots and war alone is myopic. From the perspective of mainstream AI texts like Stuart Russell and Peter Norvig’s award-winning textbook, robots are just one kind of computational program called an agent. According to Russell and Norvig, an agent is a program that can sense and respond to its environment, making the best decision it can given a performance measure and a belief about a set of alternatives. Viewed from this perspective, a robot is only needed when an agent’s goal requires physical interaction with the environment. Problem areas that don’t require a program to directly manipulate the physical environment outnumber those that do. Consider Apple’s Siri. We don’t need Siri to physically exist in the world outside our iPhone screens. All we want is for Siri to return a list of burger joints when we get hungry. If you’ve given up on beating the mobile game Flappy Bird because it’s too damn hard, you could also use AI learning algorithms to train a program to play it for you.
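
To see how little an “agent” requires, here is a hypothetical sketch of a Siri-like burger-recommending agent: it senses its environment, scores the alternatives against a performance measure, and acts on the best one (all names and numbers are invented):

    # A minimal software agent: sense the environment, then pick the action
    # that scores best against a performance measure. No body required.
    def sense(environment):
        # Percept: is the user hungry, and which restaurants are nearby?
        return environment["hungry"], environment["nearby"]

    def act(percept):
        hungry, nearby = percept
        if not hungry:
            return "do nothing"
        # Performance measure (invented): rating divided by distance.
        score = lambda place: place["rating"] / place["distance_km"]
        best = max(nearby, key=score)
        return f"recommend {best['name']}"

    environment = {
        "hungry": True,
        "nearby": [
            {"name": "Burger Shack", "rating": 4.2, "distance_km": 0.5},
            {"name": "Noodle Bar",   "rating": 4.7, "distance_km": 3.0},
        ],
    }
    print(act(sense(environment)))  # -> recommend Burger Shack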

The potential of non-robotic agents, however, extends far beyond burger-fetching iPhone programs and Flappy Birds. Army officer Anthony Cruz grasped the true potential of AI when he wrote about the need for a notional “robot general” that would take on higher-order cognitive functions in strategy and military planning. Yes, Cruz’s AI would do usual repetitive and automation-like tasks like sifting through multiple sources of raw information, producing intelligence indicators, and monitoring surveillance feeds. But Cruz argues that his military AI could do much more. An AI could provide data and make recommendations to the commander through its management, retrieval, and critical analysis of information. And while a human being can control a very limited number of unmanned aerial vehicles, an AI controller can handle the cognitive load and decisionmaking requirements a swarm attack implies.
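
To give a flavor of the bookkeeping such a controller would perform continuously, here is a deliberately crude sketch that greedily pairs each vehicle with the nearest unclaimed target (positions invented; a real swarm planner would be far more sophisticated):

    # Toy swarm controller: assign each unmanned vehicle to the nearest
    # unclaimed target. A human can do this for two or three vehicles;
    # an algorithm can do it for hundreds, every second.
    import math

    vehicles = {"uav1": (0, 0), "uav2": (5, 5), "uav3": (9, 1)}  # invented positions
    targets  = {"t1": (1, 1), "t2": (6, 4), "t3": (8, 0)}

    def assign(vehicles, targets):
        remaining = dict(targets)
        plan = {}
        for name, (vx, vy) in vehicles.items():
            if not remaining:
                break
            nearest = min(remaining,
                          key=lambda t: math.hypot(remaining[t][0] - vx,
                                                   remaining[t][1] - vy))
            plan[name] = nearest
            del remaining[nearest]
        return plan

    print(assign(vehicles, targets))  # {'uav1': 't1', 'uav2': 't2', 'uav3': 't3'}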

These sorts of applications raise important dilemmas, though not necessarily the ones that Hollywood AI would lead you to suspect. Shelve for a moment the much-debated issue of a robot that makes a kill decision. How much control do we really have over our own lives when we use technologies that most of us do not understand? Yes, users ultimately pull the trigger. But do they really understand how they came to make the kill decision? Sufficiently advanced technology, as Arthur C. Clarke once wrote, “is indistinguishable from magic.” If the humans who make lethal targeting choices do so on the basis of machines they don’t understand (machines that reason in a manner distinct from human decision-making), then is it man or machine that is truly “autonomous”?

Computer algorithms can also be gamed. For example, both male and female mathematicians have figured out the algorithmic rules of the online dating world and gamed the system to optimize their chances of finding a desired mate. As funny and inspiring as these stories are, they represent the romantic equivalent of using a cheat code to beat a video game. And they also show that lovelorn mathematicians aren’t the only ones who can game computer programs. In the video game Black Ops 2, a villain hacks America’s drones and uses them to demolish Los Angeles. But real-world methods of exploiting robotic vulnerabilities are far more subtle. Information security researcher Fiona Higgins argues that certain types of robots learn from the environment in a way that would allow an adversary to influence them. Instead of hacking the robot, an adversary can merely control the environmental inputs the robot receives so that it learns and adapts in the way the adversary desires.
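
A crude sketch of that idea, assuming a hypothetical system that learns a “threat” threshold from example sensor readings the adversary can partly supply (all numbers invented):

    # Toy illustration of poisoning a learning system through its inputs.
    # The system learns a threshold separating benign from threatening
    # readings; an adversary who controls part of the training stream can
    # drag that threshold without ever hacking the system itself.
    def learn_threshold(benign_readings, threat_readings):
        # Put the decision boundary halfway between the class averages.
        benign_avg = sum(benign_readings) / len(benign_readings)
        threat_avg = sum(threat_readings) / len(threat_readings)
        return (benign_avg + threat_avg) / 2

    clean_benign = [1.0, 1.2, 0.9, 1.1]
    clean_threat = [5.0, 5.5, 4.8, 5.2]
    print(learn_threshold(clean_benign, clean_threat))      # about 3.1

    # Adversary feeds extreme but innocuous-looking "benign" examples.
    poisoned_benign = clean_benign + [4.5, 4.6, 4.7]
    print(learn_threshold(poisoned_benign, clean_threat))   # about 3.85

After the poisoning, readings around 3.5 that the clean system would have flagged now slip through as benign.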

Additionally, machines don’t just interact with humans to do their jobs. We are entering a “second economy” composed of an ecology of interacting groups of algorithms and agents. Game theory tells us that what’s optimal for individual humans isn’t always optimal for everyone, and the same is true for machines. The new field of algorithmic game theory originated from the need to solve collective action problems involving interacting algorithms that fouled things up for the virtual group by doing what was best for themselves. Consider the problems posed by a mission-critical DoD system that underperforms because each of its individual subprograms (perhaps reflecting different agency or service mandates) tries to maximize its own performance measure at the expense of the overall system.
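
A stripped-down, invented illustration of that collective-action problem: two subprograms share a data link, each one’s individually optimal move is to hog bandwidth, and the result is worse for the system than cooperation would have been.

    # Two subprograms contend for a shared data link. Each can "share"
    # bandwidth politely or "hog" it. Throughput payoffs (invented numbers):
    THROUGHPUT = {
        ("share", "share"): (3, 3),
        ("share", "hog"):   (0, 5),
        ("hog",   "share"): (5, 0),
        ("hog",   "hog"):   (1, 1),
    }

    def best_response(other_action):
        # What maximizes my own throughput, given what the other does?
        return max(["share", "hog"],
                   key=lambda mine: THROUGHPUT[(mine, other_action)][0])

    for other in ["share", "hog"]:
        print(f"if the other subprogram plays '{other}', my best move is '{best_response(other)}'")

    # Hogging is individually optimal either way, so both subprograms hog:
    print("total throughput if both act selfishly:", sum(THROUGHPUT[("hog", "hog")]))      # 2
    print("total throughput if both cooperate:", sum(THROUGHPUT[("share", "share")]))       # 6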

A related problem is that while we understand the individual operations of programs very well, what they do collectively is far more unpredictable. The defense implications of this should be sobering. As multi-agent robotics expert Lynne Parker notes in her overview of multi-robot problems, it is well known that well-accepted single-robot learning methods do not work with groups. By making the robots and their behavior homogeneous, scientists can reduce the complexity of coordinating desired group behaviors. But even with similar machines in a team the coordination problem is quite complex – to say nothing of robot teams whose members have different behaviors and design specifications. Given that multi-agent teams of both virtual and physical agents are forecast as the future of warfare, how do we ensure that our robotic formations produce the group-level behavior we desire?

We need to start thinking about these issues now if we want to prevent future disasters. Yet one searches in vain for discussion of core multi-agent AI issues, such as the credit assignment problem, in future-warfare debates about artificial intelligence and robotics. Such issues cannot be left solely to the computer scientists and roboticists who research and build the programs and robots we may see on future battlefields. They also concern the commanders, strategists, and defense analysts who must evaluate a new grammar of war transformed by computation and machine intelligence.
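
The credit assignment problem itself can be stated in a few lines of code. In this invented sketch, three agents receive one shared team score; nothing in that single number tells an individual agent whether its own action helped or hurt, and one standard remedy (a “difference reward”) is shown for contrast:

    # Three agents each pick an action; only a single team score comes back.
    # Agent 2's choice was actively harmful, but the shared reward gives
    # every agent identical feedback, so naive learning reinforces all three
    # equally. This is the multi-agent credit assignment problem.
    def team_score(actions):
        # Invented scoring: "scout" and "strike" help, "idle" hurts.
        value = {"scout": 2, "strike": 3, "idle": -1}
        return sum(value[a] for a in actions)

    actions = ["scout", "strike", "idle"]
    shared_reward = team_score(actions)
    print("shared reward seen by every agent:", shared_reward)  # 4

    # One remedy: a "difference reward" (what the team would have scored
    # without me), which isolates each agent's actual contribution.
    for i, action in enumerate(actions):
        counterfactual = team_score(actions[:i] + actions[i + 1:])
        print(f"agent {i} ({action}) contribution:", shared_reward - counterfactual)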

If P.W. Singer is correct about the (AI-fused) future of conflict, military thinkers must engage with the theories of computation and artificial intelligence, not merely familiarize themselves with the technology. The mathematical laws of computation existed long before personal computers, just as Clausewitz’s ideas hold true for our day despite their origin in the Napoleonic wars. The well-rounded strategist can recite from memory the core ideas of Clausewitz and Mahan, so why not also learn the wisdom of computer scientists like AI pioneer Alan Turing or behavioral robotics theorist Rodney Brooks?

None of this is easy, particularly for defense analysts with backgrounds in the humanities and soft sciences, who will likely be intimidated by the mathematical element of computer science. But if the resistance could reprogram Arnold Schwarzenegger and send him back in time to protect John Connor, there’s reason to hope that our civilian and military thinkers and practitioners will engage war’s new (computational) grammar.



Adam Elkus is a PhD student in Computational Social Science at George Mason University and a columnist at War on the Rocks. He has published articles on defense, international security, and technology at CTOVision, The Atlantic, the West Point Combating Terrorism Center’s Sentinel, and Foreign Policy.
