16 January 2018

Don’t fear the robopocalypse: Autonomous weapons expert Paul Scharre

Lucien Crowder

Paul Scharre, a senior fellow at the Center for a New American Security, has pretty good credentials when it comes to autonomous weapons. If you’ve ever heard of Directive 3000.09, which established the Defense Department’s guidelines for autonomous weapons, Scharre led the working group that drafted it. And he’s got a relevant book due out in April: Army of None: Autonomous Weapons and the Future of War. For what it’s worth, Scharre also led a Defense Department team that established guidelines on directed energy technologies. And another on intelligence, surveillance, and reconnaissance. Prior to his time in the Pentagon, he served as an Army Ranger in Afghanistan and as a civil affairs specialist in Iraq.

When Scharre agreed to a Bulletin interview about autonomous weapons, the result was an informed, passionate discussion of autonomous weapons’ utility, the likelihood that they’ll be deployed in combat, and what opponents of “killer robots” get wrong—so very, very wrong—when they argue for a ban on autonomous weapons.

LUCIEN CROWDER: First, you were deployed as an Army Ranger in Iraq and Afghanistan—correct?

PAUL SCHARRE: Yes. I did four tours overseas, three in Afghanistan and one in Iraq. I served in the Army's 3rd Ranger Battalion in Afghanistan, then later deployed to Iraq as a civil affairs specialist.

CROWDER: I see. Now, did you encounter any combat situations in which access to autonomous or semi-autonomous weapons would have been desirable? And if so, could you tell me about them?

SCHARRE: This may seem like a subtle distinction, but it's one that often gets lost in the discussion and I think it's really important—I think in many situations, having some kind of remote weapons system would have been very valuable. Something that creates more distance, physical distance, between a potential threat and a soldier. In particular, in insurgent and guerrilla kinds of conflicts, there are a lot of really ambiguous situations where soldiers have to make split-second decisions. Someone's coming up on you and you're not sure if they might be a threat or not. It puts [soldiers] in a really difficult situation. They're balancing being worried about the potential harm that might come to them or their teammates if the person approaching them, say, has a suicide vest on, or is driving a car that's rigged full of explosives—versus maybe they're just an innocent civilian, just lost, or not paying attention. We encountered those kinds of situations all the time; [there was a] really bad suicide vest problem in the area [in Iraq where] I was in 2007 and 2008, during the peak of the war.

But I would say that, in my experiences in those kinds of conflicts… the level of ambiguity, and [the] context required to make these decisions, was such that it's very difficult to imagine programming a machine to make decisions [about using lethal force] with any of the [artificial intelligence] tools that we have today. Even some of the basic tools, and [the] most cutting-edge tools in the lab, when you think about trying to apply those in the real world, the current artificial intelligence is too brittle to really understand this kind of context. That is a major limitation.

CROWDER: I see. Now, when you say you might want something that creates more distance, you're basically talking about some sort of drone, right? I mean, not airborne, but...

SCHARRE: Yeah, certainly as long as you're not thinking about it in an aerial context, right? For a lot of people, they envision sort of an aircraft. In a lot of settings, what might be valuable would be some kind of ground-based system that could be used as essentially a checkpoint or a point person, or point teammate, [or a] robot teammate in the squad—so that as you're moving down an alleyway, there's sort of a perimeter, if you will, maybe a fixed or mobile perimeter of robotic systems that any potential threat encounters first. You could imagine both [kinds] around bases—having an outer perimeter that has remotely controlled robotic sentries, something like maybe the [SGR-A1] gun that the South Koreans have put on their DMZ. Then for mobile squads, I would envision something like a cloud of air and ground robotic systems that might follow the squad around and provide outer perimeter security and identify potential threats. Or they’re under human supervision, and according to their guidance, maybe respond accordingly.

CROWDER: Now, in what sorts of conflicts do you see autonomous and semi-autonomous weapons as really being most appropriate? Because it sounds like they wouldn't have been all that appropriate to what you were doing.

SCHARRE: I think probably you could envision autonomous weapons being most useful or most viable in situations where militaries are really targeting other military equipment. I think [that environment would be] much easier from a targeting standpoint, and [an area] where [autonomous weapons would be] most militarily valuable. Advantages in speed, and in operating without communications, would be most valuable.

There's something that I think often gets lost in conversations about autonomous weapons—what is the value of this autonomous weapon? Why would someone want to be building an autonomous weapon? That somehow gets missed in the discussion, I think sometimes because the concept of an autonomous weapon that's going out and targeting on its own, without human supervision, gets conflated with robotics or uninhabited vehicles as a whole.

You'll sometimes hear people in these discussions, on all sides, mix up some of the concepts that might apply to remotely controlled systems that still have humans in them [as opposed to] autonomous systems. For example, one of the concerns that I'll sometimes hear [is that] if we have autonomous weapons, then militaries will be able to fight wars without putting people at risk, and that will lead to more wars. And that may be true, but that's really about having remotely operated systems that don't have people in them. That would be an issue regardless of whether or not there are autonomous weapons.

CROWDER: Turning to a different sort of question, you led the Pentagon working group that drafted Directive 3000.09, which in 2012 established the department's guidelines on autonomous weapons systems.

SCHARRE: Yes.

CROWDER: Just about the first really substantive sentence in the directive says, "Autonomous and semi-autonomous weapons systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force." I hear a bit of ambiguity there—as to whether commanders and operators will exercise judgment as opposed to merely being able to exercise judgment. Is that a real ambiguity, or am I over-reading?

SCHARRE: Oh, I see what you're saying. No, I don't think that's intended to be ambiguous. I think that's intended to direct guidance at different types of actors, right? There's guidance to the people who were designing and testing the weapons systems, and then guidance to those who actually use them. They're different people within the bureaucracy. The [Defense Department] directive, I think it's important to keep in mind, was not intended to be a public statement of US government policy on autonomous weapons for the purposes of … the international debate. The international debates that are going on today, which I think are really important, really weren't an issue at the time. They just hadn't come up yet.

The directive is really intended for internal audiences within the department. There's guidance written to people who would do research, people who would develop weapons, people who would test weapons, people who would use them and employ them. Those are different audiences inside [the Defense Department], those are different bureaucratic actors. There's another sentence later on—I don't have it directly in front of me—that does give guidance to the people who would use the weapons. In fact, there's a couple places where it gives guidance to people who would use them, and it gives them instructions on how they ought to conduct themselves.

CROWDER: Okay, got it. Now, do you see the deployment in combat of fully autonomous weapons as inevitable—if not necessarily by the US, then by somebody, at some point?

SCHARRE: I think that's a good question. I think the technology is certainly making it possible, right? It'll be technologically feasible. I think it depends on what you mean by it being deployed—widespread use, common throughout war, or isolated instances? I think chemical weapons [are] an interesting example today, where you have widespread revulsion internationally with chemical weapons, but they're still used in isolated cases by regimes that typically don't care.

I think it's very hard to imagine a world where you physically take the capacity out of the hands of rogue regimes to build autonomous weapons. The technology is so ubiquitous that a reasonably competent programmer could build a crude autonomous weapon in their garage. The idea of putting some kind of nonproliferation regime in place that actually keeps the underlying technology out of the hands of people—it just seems really naïve and not very realistic. I think in that kind of world, you have to anticipate that there are, at a minimum, going to be uses by terrorists and rogue regimes. I think it's more of an open question whether we cross the threshold into a world where nation-states are using them on a large scale.

And if so, I think it's worth asking, what do we mean by “them”? What degree of autonomy? There are automated defensive systems that I would characterize as human-supervised autonomous weapons—where a human is on the loop and supervising their operation—in use by at least 30 countries today. They've been in use for decades and really seem not to have brought about the robopocalypse or anything. I'm not sure that those [systems] are particularly problematic. In fact, one could see them as being even more beneficial and valuable in an age when things like robot swarming and cooperative autonomy become more possible.

Are we talking about anti-personnel weapons or anti-vehicle weapons? In the past, that's been an important distinction for various types of weapons, like mines. In some discussions on the topic, you'll see people make that distinction, and they seem to be predominantly concerned about, say, anti-personnel weapons—but not in all cases, right? Some concerns about the concept apply more in the case of anti-personnel weapons, [and] others don't. I think this question of the inevitability of autonomous weapons is a really contentious issue, in part because it's used, I think, by both sides—people who are both for and against a ban—as a sort of cop-out for an honest and serious discussion about the challenges of prohibiting and restricting weapons. People who are in favor of a ban often, I think, gloss over some of the practical realities of actually implementing mutual restraint in warfare. Having a treaty is simply not sufficient. The politics of getting to a treaty is one thing, but having a treaty is not sufficient. There are many examples in the past where nations had a treaty on the books that banned a weapon, and it was still used in war. How do you get to successful restraint?

On the other hand, I think there are people who oppose the ban who often use this argument about technological inevitability to kind of shut down discussion and say, "Well, everything's inevitable, and this technology is going to be there, and so we just need to build it first and better." I think that's not true. There are examples of successful restraint in war—but also, just because someone else is going to do something, doesn't mean it's a great idea. There are legitimate concerns about [issues] like safety and controllability—[so] even if another country did build such a weapon, it doesn't necessarily mean that the right answer is to build more of them, faster, yourself.

CROWDER: Now, I think I can guess your general answer to the next question, but I'll ask it anyway. Is it possible to imagine a future in which soldiers—from the most advanced militaries, anyway—rarely expose themselves to physical risk because they don't even travel to combat zones?

SCHARRE: No, I don't think that's possible. I think that a lot of the discussions—these sorts of visions where people extrapolate from drones today to this concept of bloodless wars, where robots fight other [robots]—are [unrealistic], for a number of reasons. One is that, at a technical level, communications—and the inability to have long-range [protection from jammers] for communications—will drive militaries to put people forward in the battle front. They may not be at the very, very edge of the battlefield, but they need to be relatively nearby to control uninhabited systems. Right now, you have this paradigm that militaries are using with drones where they're basically remotely piloted or remotely controlled. It's a very fragile paradigm because, if you jam the communications links, these drones just kind of go stupid and they don't really do anything constructive.

Obviously, there's a desire for more autonomy, but sometimes people immediately leap to the opposite end of the spectrum [to] something that's fully autonomous. [What that means is] a little bit ambiguous, kind of vague. But [people imagine a fully autonomous weapon] making all of these complex decisions on its own. That's also very hard to imagine. Sometimes people then draw conclusions like, "Well, this technology is bunk," or "It can't be used." Sometimes you'll hear people on the military side wanting to dismiss this [technology]. Militaries will say, "Oh, this stuff's garbage." While there are really interesting things happening in research labs, the US military, at least, has been very slow in moving forward and actually integrating robotic systems into its force, largely because of this concern about the control paradigm and the fragile nature of these communications.

Advanced militaries can build jam-resistant communications that allow them to operate in contested environments where the communications links might be jammed by adversaries. But they're limited in range and bandwidth, so you're not going to be able to transmit large amounts of data around the globe in a contested electromagnetic environment. That's just not possible. I think that moves [us] toward thinking about human/machine teaming. People [are] forward in the battle space in various vehicles—ships, or aircraft, or tanks, or submarines—and they have robotic teammates that are relatively nearby. Maybe the robotic teammate is over the hill, or slightly forward, using its sensors to detect the enemy. The human is relatively nearby, coordinating it. The vehicle has a high degree of autonomy, but the human is still making some kinds of decisions.

The other thing [is] that you're going to find it very hard to automate lethal-force decisions to a level [you can] be comfortable with. But adaptability and flexibility on the battlefield [will be challenging as well]. There's this maxim that no plan survives contact with the enemy. Militaries plan, and do battle drills, and train so warfighters have a good concept for how to fight. But at the end of the day, militaries expect that their soldiers will be adaptive and flexible on the battlefield, and that's actually what's going to win the day. Machines are very, very poor at that. We don't know how to build machines that can do that very well at all. That's another reason why I think you'd want people nearby, to be able to have this very flexible, real-time control over systems. To not have that would be, I think, a very brittle kind of military that might be able to do things with a lot of precision, and accuracy, and speed, but could end up failing quite badly if the enemy does something creative to trick the system.

But I think there's also this broader philosophical point. When I look at robotics systems, I see them as a new step on a continuum of increasing distance in warfare, [which dates] back to the first time somebody picked up a rock and threw it at someone. At every point in time, people have moved toward weapons that give them greater distance, but people are still fighting wars against other people, and they're still killing other people. They're just now killing them with cannons, or missiles, or what I think in the future will be robotics systems. But at the end of the day, [what forces wars to end, at] the political level, will be violence inflicted on humans. That's tragic and terrible, but I think that is the reality of what war is, so I don't think that's likely to change.

CROWDER: Now, why do drones—operated from Florida, I believe it is—work today in Afghanistan despite the communications problems? Is it just because the adversaries in a place like Afghanistan are relatively unsophisticated?

SCHARRE: Yeah, because the Taliban, or ISIS in places like Iraq and Syria, don't have the capacity to jam the communications links of US drones. There were some incidents several years ago where they were intercepting the video feed, which at the time had been unencrypted, and you basically could pick [it] up with an old UHF television. But the actual command links to control the systems are a whole other matter. These types of actors don't have the ability to actually jam those. But that certainly would not be the case for another major nation-state.

CROWDER: It doesn't sound like an incredibly sophisticated capacity.

SCHARRE: No, not really.

CROWDER: Now, let's see, a lot of Bulletin readers will be familiar with arguments against autonomous weapons, such as you might hear from the Campaign to Stop Killer Robots or the Future of Life Institute. But arguments sympathetic to the development of these weapons will be less familiar. I know that you believe autonomous weapons present legal, moral, and ethical issues that society must contend with. But if societies do contend with those issues, are you prepared to make an argument that autonomous and semi-autonomous weapons not only will be developed, but should be developed?

SCHARRE: Well, I think there's a lot of tremendous benefits for [uninhabited vehicles], as I mentioned earlier. [As for] autonomy in general, in vehicles and weapons, I think there's a lot of good advantages to autonomy. The concept of fully autonomous weapons—that term gets thrown around a little bit, sometimes without a lot of definition—but I'll say here what I mean by that is a weapon system that could complete an entire targeting engagement cycle by itself. It could go and look for targets, find them, decide on its own, "Yep, this is a valid target that I'm going to engage," and then attack it all by itself without any human involvement whatsoever, or human supervision.

[People have raised] a couple objections to those [systems], right? I think one of the strongest arguments, [which] cuts across many of the legal, ethical, and strategic concerns that people raise, is that machines are unable to understand the broader context for their actions. They can identify quite accurately—really, in some cases better than humans—what an object is: "Yes, this is an AK-47 that a person is holding." [But] understanding the context for that situation—and who is this person, and why are they here? [Machines] really don't do very well at [these tasks]. Humans do quite well at those things. I think there's probably a lot of good reasons to keep humans involved in those things.

The circumstances [in which] we'd really want to fully automate those weapons would be relatively small. There are places where you'd be worried about another nation-state jamming communications links, and where the speed of interactions is pretty important. You might want to send, say, a robotic combat aircraft into a hostile area, maybe to hunt down enemy radar systems, and you know the communications are going to be jammed, and it's important that the vehicle does not have a person on board because that allows you to make it smaller, or stealthier, or give it other performance advantages. Or maybe just take greater risk with it [in] a hostile environment. And you're hunting mobile targets, so you can't really pre-program the targets, other than basic parameters for the types of system to look for.

I think there might be some isolated cases where, from a military standpoint, you could say that there's value here—right? And this gives militaries an important capability that they're going to want. I think it's worth having a discussion about the pros and cons from a risk standpoint. What's the benefit? What are the risks? How should we think about various types of harm, civilian harm? Maybe some ethical concerns about human moral responsibility for that. But I think a wholesale abrogation of human responsibility for killing in war doesn't seem like a very good idea. In general, we should strive to keep humans involved in the lethal-force decision making process as much as is feasible. What exactly that looks like in practice, I honestly don't know.

CROWDER: Okay, good enough. In my job I read and think a lot about nuclear weapons, so I can't help thinking of weapons systems in terms of deterrence. I know that autonomous weapons aren't very analogous to nuclear weapons, but how would you assess the deterrence potential of autonomous weapons? Say, for a rogue state, or a peer competitor, or a terrorist outfit—would [autonomous weapons] serve a deterrence purpose?

SCHARRE: Well, let me propose two different models for deterrence. One is sort of a generalized concept of stronger militaries. Having a stronger military presumably makes others more deterred against aggression, right? So, in general, if you knew that this military had more advanced capabilities, whether they were robotics, or autonomous weapons, or some other sort of new, advanced military widget, others [might be] more deterred against taking hostile action against that country.

Nuclear weapons bring to mind a more specific model of deterrence—and this never came into play with nuclear weapons, but people theorized about a world where wars were fought at the conventional level between countries, and nuclear weapons were held in reserve as sort of a special weapon that was treated differently. That's a contested and ultimately untested proposition, and it's not clear whether a firebreak between conventional and nuclear weapons would have held in practice or, frankly, whether Soviet theorists during the Cold War even saw it that way. The Soviet war plans involved the large-scale use of nuclear weapons in Europe at the outset.

Autonomous weapons would be in the reach of all major military powers, so it ends up being a fairly equal playing field. I'm not sure that [autonomous weapons would make] a difference among them—or frankly, a material difference between them and, say, a lesser military power, at the end of the day. [But] you might imagine a world where militaries say, "We're going to hold on to these autonomous weapons and deploy them as a method of last resort, only really to combat other autonomous weapons."

The problem with that is drawing a very clear distinction between a fully autonomous and a semi-autonomous weapon, if you will. This is a problem across any type of concept or thinking about restraint—say, if a future stealth robotic aircraft was used in combat. How would you know whether it was used in a human-in-the-loop or a fully autonomous mode? I don't know. Other than getting a hold of that particular aircraft and then inspecting its code… There's no way to tell from the outside, even if you observe the behavior of the system—unless maybe you jammed its communications link, so you were confident no human could be directing it; maybe then you could tell. But otherwise, the effect would kind of be the same.

[Autonomous weapons are] very different than other weapons, like nuclear weapons or nuclear delivery systems, [or] missile launchers [or] submarines—things where you could look at them and count them. They're physical assets, and you could verify from a distance—even from satellites—whether someone is complying with restraint. [But] it’s hard to know how, if you don't know whether [a] weapon is being used [autonomously], you could really find an effective model of restraint at any level—whether that's a peacetime restraint, where people were afraid [of] even building them; or some kind of arms control regime; or wartime restraint, as was the case during World War II with chemical weapons, where militaries had them in reserve but didn't actually deploy them against each other.

CROWDER: Wait, I'm sorry. How did we get from deterrence to restraint? I kind of forgot the connection.

SCHARRE: I see them as tied, I guess. When you think about deterrence, I think of the general concept of, "I'm stronger, more powerful," and that may deter you. Then [there’s] the specific model of, "I'm going to hold on to my autonomous weapons and use them, as you might imagine nuclear weapons [being used], to deter you from also using autonomous weapons." [But] it's sort of hard for me to see how that would actually work in practice [for autonomous weapons]. It seems like an appealing model—militaries having these autonomous weapons and saying, "Well, I'm going to agree to use them with a human in the loop if you do. And I won't turn the switch to full-auto unless you do." But I don't know how you do that, if you can't actually verify what people are doing.

CROWDER: Right. Because the verification would only be possible in combat, if at all, and that does make it quite difficult, right?

SCHARRE: Well, it doesn't seem like it's even possible in combat, right? If this robotic weapon came up and shot at you, how would you know whether there was a human involved in the decision making or not?

CROWDER: Well… you certainly can't tell anything when it's just sitting on a tarmac.

SCHARRE: Certainly not then. Especially because the physical features that enable autonomous weapons—and to some extent the digital features, like sensors and software—are basically the same as in something that has lots of automated processes but still keeps a human in the loop; [you] just add a full-auto mode. You can actually imagine militaries saying, genuinely saying, "We will always keep a human in the loop," and building all their weapon systems to do that. Then, midway through a war, going, "You know what? Actually we want full-auto modes," and upgrading them with a software patch. Because the technology will simply make that possible, right? The line between technologically having a human in the loop, and not, could be just a few lines of code.
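[Editor's note: To illustrate Scharre's point that the gap between human-in-the-loop and full-auto operation could come down to a few lines of code, here is a minimal, purely hypothetical sketch. Nothing below describes any real system; all names, thresholds, and logic are invented for illustration only.]

```python
# Hypothetical illustration only -- not a description of any real weapon system.
# It shows how "human in the loop" versus "full auto" could hinge on one flag.

from dataclasses import dataclass


@dataclass
class Track:
    track_id: int
    classification: str   # e.g. "radar", "vehicle", "unknown"
    confidence: float     # classifier confidence, 0.0 to 1.0


def operator_approves(track: Track) -> bool:
    """Stand-in for a remote operator's decision. In a human-in-the-loop
    design, this step blocks until a person reviews the track."""
    answer = input(f"Engage track {track.track_id} "
                   f"({track.classification}, {track.confidence:.0%})? [y/N] ")
    return answer.strip().lower() == "y"


def engagement_decision(track: Track, *, require_human: bool = True) -> bool:
    """Return True if this hypothetical system would engage the track."""
    meets_criteria = track.classification == "radar" and track.confidence > 0.9
    if not meets_criteria:
        return False
    if require_human:
        return operator_approves(track)   # human in the loop
    return True   # "full auto": the only difference is skipping the approval step

# The "software patch" described above would amount to flipping the default:
# engagement_decision(track, require_human=False)
```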

CROWDER: Got it. Let me go into my final question here, about Russia. There are diplomatic discussions going on about autonomous weapons through a UN body, but not all that much has been accomplished so far—as I understand it, Russia's opposition to regulation being a primary reason. [But] I don't entirely understand why, on a strategic level, Russia has taken this approach. I know that Putin has said that whoever wins the [artificial intelligence] race will rule the world, but it seems to me that Russia wouldn't have much hope of matching or surpassing the US or, say, China in autonomous weapons technology. Why should Moscow want unrestrained development of autonomous weapons?

SCHARRE: There's a lot of things tied up in that question. Let me just pause for one second and think about how to unpack that in a sensible way.

CROWDER: Sure, sure.

SCHARRE: I guess for starters, I don't see a lot of daylight, to be honest, between the Russian position and that of the United States. Both countries have basically stated that they think that existing international humanitarian law is sufficient, and that there's no need for any new kind of regime. There may be slight differences in tactics, in terms of how they're communicating it, but at the end of the day, no major military power supports a ban. Of the 22 nations that have supported a ban, none are major military powers or leading robotics developers. [A few issues are] driving that kind of position—not just from Russia, but from a number of states.

One is that the rationale for a ban has been poorly articulated in international discussions. That's just a straight-up reality, right? The people who have tried to make a case for a ban have done a bad job of it. The argument has been predominantly that autonomous weapons should be banned because they cannot comply with international law. That's a bad argument for a number of reasons. One, it's circular—if they can't comply with international law, then they're already illegal, so there's no reason to ban them. It just doesn't make any sense. It logically doesn't make any sense.

Two, [the argument] hinges on the state of the technology today. Many of the objections to autonomous weapons revolve around what the technology is capable of today, and what it's not capable of. Well, technology is moving very, very rapidly, so that's really bad ground to stake your position on. What's possible may change dramatically in 18 months. Just in the few years since international discussion about autonomous weapons began, we've seen tremendous advances in artificial intelligence and machine learning. We've moved from a world where, 10 years ago, a person would've legitimately said, "Look, machines are terrible at object recognition—how would you possibly be able to build a machine that can identify objects accurately?" Now, [machines have] actually beaten humans at standard benchmark tests. [Admittedly,] there are huge limitations with machines' ability to do object recognition today. They don't understand context very well, and there are obviously adversarial data problems [involving] things like fooling images, [which don't have] good solutions right now. Those could persist for a long time, or they could change overnight—right? It's hard to know. But I think that saying, "Well, based on the state of the technology today, this is no good"—it just doesn't hold water with a lot of people.

Then the third problem [with arguments in favor of banning autonomous weapons] is that [these arguments] basically revolve around civilian harm and humanitarian harm. I think this is predominantly a function of what people are used to. Folks involved in the Campaign to Stop Killer Robots—many of them have experience in prior campaigns on landmines and cluster munitions, which were driven by humanitarian harm. [But with landmines and cluster munitions], you had actual humanitarian harm that was real, that was happening. You had people that were being killed or maimed by landmines and cluster munitions. You could go into international settings and you could criticize diplomats and countries, fairly, for not taking action.

Here, that's just not the case. It's a hypothetical. Maybe these things will be damaging to civilians, maybe they won't be. But in the absence of real evidence, and in particular given the fact that automation is very good at things like precision, there are rational arguments—you may not choose to believe them, but they're at least logical—for why you might believe that autonomous weapons actually might be better at avoiding civilian harm. Certainly, precision-guided munitions have dramatically reduced civilian harm in warfare over the past several decades. When you look at, for example, civilian casualties from airstrikes with precision-guided weapons today versus, say, wide-area bombing of cities in World War II, it's a completely different kind of world that we're talking about.

Another problem is that focusing on humanitarian harm pits what is basically an incidental concern, for militaries, against a vital concern. You're basically telling militaries, "Look, we want to take away this weapon from you, which may be valuable, may be game-changing, may be a war-winning weapon, because it might cause civilian harm which isn't real today." Well, most countries care about international humanitarian law and civilian harm about as much as they want to care, right? You might look at a country like Russia and say, "Well, they don't take [international humanitarian law] very seriously," and you might look at their conduct in places like the Ukraine or elsewhere and say, "Well, they don't [show] enough concern about civilian casualties." But they care as much as they want to care, right? It's not an argument that really resonates with countries.

When you look at attempts to ban weapons, the recent history—where you have nongovernmental actors in civil society trying to pressure states to ban weapons for humanitarian reasons—is actually quite unusual. Most attempts to ban weapons, successful and unsuccessful, have been driven by great powers. There were some in the late 19th and early 20th centuries, and certainly a large number of Cold War–era agreements, both bilateral and multilateral. And all of these were driven by great powers trying to restrict weapons for a variety of reasons. Maybe because they caused civilian harm, or unnecessary suffering to combatants, or they were uncontrollable or destabilizing, or some other reason. But it was driven by states.

The dynamic in the [UN Convention on Certain Conventional Weapons] is, from the get-go, one where you basically have nongovernmental actors who are trying to take away from governments tools that they might see as potentially very valuable. Countries don't trust each other, right? So it's not necessarily that Russia or other countries say, "I really need these autonomous weapons." It's that none of them believe that, if they actually give them up, others will.

There are probably three principal barriers you need to overcome if a ban, politically, is actually going to happen. One is [that] countries need to be able to overcome this kind of distrust [of] each other, so they know that, if they were to give up a weapon, others would.

[Two], they need to believe that there's some rationale for why they're giving them up—that it's in their interest to do so, other than making nongovernmental organizations happy. [That argument] resonates with some countries, with some Western democratic countries. It's not going to resonate with a country like Russia.

[Three, there must be] a way to clearly distinguish between what's in and what's out. That's a real problem with autonomous weapons because the definitional issues are kind of fuzzy and slippery, and there's clearly a spectrum of autonomy. People have to be able to really clearly agree: This is a banned weapon and this is a legitimate one. Those are all serious hurdles, and they have seen essentially zero effort from those who are arguing for a ban. Instead, there are things like the “Slaughterbots” video—I recently wrote a critique of it—which is not a serious attempt to engage in dialogue on the topic, or to respond to people's concerns about the feasibility of a ban. It's just a way to try to drum up public fear.

I think until people who want a ban really focus, and try to convince countries why it would be in their interest to support a ban, it's hard for me to see that countries are actually going to support one. Why would Russia want to support a ban? How does it benefit Russia at all? So if the concern is that these autonomous weapons might go rogue and kill a bunch of civilians, my guess is the Russians probably don't find that a very convincing argument, or they're not particularly concerned. If the argument is about accountability, I don't know that that resonates with them either, right? So what's the argument for why it would be in their interest to support a ban?

CROWDER: Well, I don't have an answer for you off the top of my head.

You've mentioned other arms control efforts that have resulted in international agreements, and you said that it was in states’ interest to pursue those agreements. But pressure [from nongovernmental organizations] certainly had something to do with it in a lot of cases. Wouldn't you say?

SCHARRE: Well, what are you thinking of?

CROWDER: Cluster munitions, landmines…

SCHARRE: I think for those cases, I would say yes, that [nongovernmental] pressure was decisive in making those happen. [But] I think those are actually sort of unusual exceptions when you look at the broad pattern of weapons being banned. Right now, what you have is [nongovernmental organizations] sort of using that playbook, coming from landmines and cluster munitions. I think it's a strategy that's unlikely to work in this case.
