9 September 2016

DEFENSE SCIENCE BOARD RECOMMENDS “IMMEDIATE ACTION” TO COUNTER ENEMY ARTIFICIAL INTELLIGENCE; AI & THE FUTURE OF WAR

August 29, 2016 

In the August 25, 2016 edition of DefenseOne.com, Patrick Tucker summarized the findings of a recently completed Defense Science Board (DSB) study on the state of Artificial Intelligence (AI) with respect to our adversaries and near-peers. Mr. Tucker begins with this opening line: “The DSB’s much-anticipated ‘Autonomy’ study sees the promise, and peril, in the years ahead. The good news,” he writes: “autonomy, AI, and machine learning could revolutionize the way the military spies on its enemies, defends its troops, or speeds its supplies to the front lines. The bad news: AI in commercial and academic settings is moving faster than the military can keep up.” Among the most startling findings, Mr. Tucker wrote, is a recommendation that the United States take “immediate action” to figure out how to defeat new, AI-enabled operations.

“In issuing this warning,” Mr. Tucker observes, the DSB study harks back to military missteps in cyber and electronic warfare. While the Pentagon was busy [focused] on developing offensive weapons, techniques, and tricks to use against enemies, it ignored the threat, and underestimated just how vulnerable its own combat systems were to these two disciplines.

“For years, it has been clear that certain countries could, and most likely would, develop the technology and expertise to use cyber and electronic warfare against U.S. forces,” the study’s authors wrote. “Yet, most of the U.S. effort focused on developing offensive cyber capabilities, without commensurate attention to hardening U.S. systems against [these same kind of] attacks from others. Unfortunately,” the DSB Panel warned, “in both domains, that neglect has resulted in the Department of Defense (DoD) spending large sums of money today to patch systems against [such] potential attacks.”

“That cycle could repeat itself in the field of AI,” the DSB Study warned.

“To counter this [emerging] threat,” the study recommends that the Under Secretary of Defense for Intelligence, USD(I), “raise the priority of [intelligence] collection and analysis of foreign autonomous systems.” The DSB also recommends that the Under Secretary of Defense for Acquisition, Technology, and Logistics, USD(AT&L), “gather together a community of researchers to run tests and scenarios” to discover “counter-autonomy technologies, surrogates, and solutions.” “In other words,” Mr. Tucker wrote, “practice fighting enemy AI systems.” The DSB added that “this community should have wide discretion in conducting research into commercial drones, software, and machine learning.”

Artificial Intelligence And The Future Of War

There are few subjects that can stir the imagination more than AI. Almost every day, there are articles by very bright people who peer into the future and see AI and its use in vastly different scenarios. Notable visionaries such as billionaire Elon Musk and legendary theoretical physicist and cosmologist Dr. Stephen Hawking have been ‘pounding the table,’ warning that “autonomous weapons will become the Kalashnikovs of tomorrow,” in reference to the Soviet AK-47, which has become the most popular and prolific assault rifle of all time. Others see a potential ‘Terminator,’ or something akin to the HAL 9000 in Stanley Kubrick’s film classic “2001: A Space Odyssey,” in which the shipboard AI, HAL, takes over the spacecraft, ‘kills’ one astronaut, and nearly kills a second.

On the other side of the argument from Musk and Hawking are individuals like Andrew Ng, Chief Scientist at Baidu Research in Silicon Valley and a Professor of Computer Science at Stanford University, who has stated that “worrying about killer robots is like worrying about over-population on Mars.” In other words, those sounding the clarion call on AI’s potential threat are over-estimating, and over-hyping, the threat that AI can and will pose in the not-too-distant future on the ‘fields of battle.’

Somewhere in between these two views is where the truth of AI’s future probably lies; but there is no doubt that AI is one of those domains where capability and strategic surprise are lurking. And it is really important that the DSB gave this emerging domain and threat some serious thought.

I have asked many times in this blog what our adversaries and near-peers are doing in this area. Are we the world’s AI leader? If so, by how much? If not, who is in the lead? What are the trends showing? Is there any indication or evidence that Russia, China, North Korea, Iran, etc., have a Manhattan Project-type effort underway in an attempt to leap ahead of everyone else? How soon do we think AI will have more than a minimal role in future combat? How could the Islamic States of the future, and the darker angels of our nature, use AI in ways we cannot presently envision or understand? Could AI become the IEDs of the 2030s and 2040s?

In David Kilcullen’s 2013 book, “Out Of The Mountains: The Coming Age Of The Urban Guerrilla,” he envisions combat in the coming decades as a cross between Blade Runner and The Terminator, where ‘wars’ will take place in the highly networked, tightly knit, and densely populated urban slums of the future. And the ‘beauty’ of AI is that, unlike nuclear weapons, this domain does not require costly or hard-to-obtain raw materials and unique/specialized parts. Rather, AI could become a poor man’s nuke: ubiquitous, and relatively cheap to mass produce. As Mr. Musk and Dr. Hawking warn, “AI weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations, and selectively killing an [entire] ethnic group.” And, of course, they are capable of inflicting strategic or capability surprise.

On the intelligence collection side of things, we can envision autonomous AI collection drones/systems that activate based on target activity, go dormant when the adversary attempts to surveil or discover such a foreign intelligence collection system, and interact with each other without human intervention. One can envision such AI collection systems changing their appearance, becoming chameleon-like to camouflage themselves and avoid detection. It is indeed a brave new world; and I am glad that DoD is spending some time and dedicating some intellectual heft to determining what we need to do to avoid a Black Swan AI attack; and/or to inflict such an AI Black Swan attack on the adversary of the future, when it is most needed by us, and most unexpected by our opposition. V/R, RCP
