Lethal Battlefield Robots: Sci-Fi or the Future of War?

Warbots don’t exist yet, and the Campaign to Stop Killer Robots hopes to keep it that way.

We're not quite there yet. 20th Century Fox/Entertainment Pictures/ZUMAPRESS.com



“We are not talking about things that will look like an army of Terminators,” Steve Goose, a spokesman for the Campaign to Stop Killer Robots, tells me. “Stealth bombers and armored vehicles—not Terminators.” Goose, the director of Human Rights Watch’s arms division, has been working with activists and other experts to demand an international ban on robotic military weapons capable of eliminating targets without human intervention—i.e., killer robots.

The bluntly titled campaign, which sounds like something from a Michael Bay flick or Austin Powers, involves nine organizations, including the International Committee for Robot Arms Control. The campaign is spearheading a preemptive push against efforts to develop and potentially deploy fully autonomous killer robots—a form of high-tech weaponry that doesn’t actually exist yet.

“I’m not against autonomous robots—my vacuum is an autonomous robot,” says Noel Sharkey, a professor of artificial intelligence and robotics at the University of Sheffield and chair of the International Committee for Robot Arms Control (and a fixture on British television). “We are simply calling for a prohibition on the kill function on such robots. A robot doesn’t have moral agency, and can’t be held accountable for crimes. There’s no way to punish a robot.”

The real-life equivalent of Isaac Asimov’s Three Laws of Robotics (which posit that robots may not harm humans, even if they are instructed to do so) is, like killer-robot technology itself, a ways off. In April, the United Nations released a report (PDF) that recommended suspending the development of autonomous weapons until their function and application are discussed more thoroughly. Last December, the Department of Defense issued a directive on weapon systems autonomy, calling for the establishment of “guidelines designed to minimize the probability and consequences of failures in autonomous and semi-autonomous weapon systems that could lead to unintended engagements.”

Though the Pentagon document stresses the need for human supervision of military robots, critics claim it leaves the door open for the development of autonomous lethal robots that aren’t subject to meaningful human oversight. “We already don’t understand Microsoft Windows; we’re certainly not going to understand something as complex as a humanlike intelligence,” says Mark Gubrud, a research associate working on robotic and space weapons arms control at Princeton. “Why would we create something like that and then arm it?” Killer robot foes also note that, according to the Pentagon directive, it only takes signatures from two department undersecretaries and the chair of the Joint Chiefs of Staff to green-light the development and use of lethal autonomous technology that targets humans.


Militaries and contractors are already working on combat systems that surpass our current fleet of killer drones by requiring less human control. The US Navy commissioned Northrop Grumman’s X-47B (as yet unarmed) to demonstrate the takeoff and landing capabilities of autonomous unmanned aircraft. Researchers at Carnegie Mellon University have developed a trucklike combat vehicle called the “Crusher,” designed for fire support and medevac, for the Defense Advanced Research Projects Agency. (“This vehicle can go into places where, if you were following in a Humvee, you’d come out with spinal injuries,” said the director of DARPA’s Tactical Technology Office.) The $220 million Taranis warplane, developed by BAE Systems for the United Kingdom, could one day conduct fully autonomous intercontinental missions. And China has been developing its Invisible Sword unmanned stealth aircraft for years.

Yet the technology required to make an advanced fighting robot is still far from complete. “Our vision and sensing systems on robots are not that good,” Sharkey says. “They might be able to tell the difference between a human and a car, but they can be fooled by a statue or a dog dancing on its hind legs, even.” Experts also say that the technology is nowhere near being able to make crucial distinctions between combatants and noncombatants—in other words, whom it’s okay to kill.

This technological uncertainty has caused some experts to think a preemptive injunction on warbot development is misguided. “We are making legal arguments based entirely on speculation,” says Michael Schmitt, chairman of the international law department at the US Naval War College. (Schmitt recently planned a workshop on the legal issues surrounding killer robots, but sequestration has delayed it.) “Do I have my concerns? Of course. But these systems have not been fielded on the battlefield, nor are they in active development in the US.”

Schmitt argues that existing international law would keep the use of robots from spiraling into a sci-fi nightmare. “If such a system cannot discriminate between civilians and enemy combatants in an environment, then it is therefore unlawful,” he explains. “No one is talking about a George Jetson-type scenario. What we are talking about is going to a field commander and saying, ‘Here’s another system, like a drone, or a frigate, or an F-17.’ If I were a commander, I would know what laws there are, and in what situation I can use it.”

Another strand of the debate is whether killer robots would reduce or increase civilian casualties. The Department of Defense has been funding the research of Georgia Tech roboticist Ronald Arkin, who seeks to design a software system, or “ethical governor,” that will ensure robots adhere to international rules of war. He’s argued that machines will be more effective fighters than humans. “My friends who served in Vietnam told me that they fired—when they were in a free-fire zone—at anything that moved,” Arkin recently told the New York Times. “I think we can design intelligent, lethal, autonomous systems that can potentially do better than that.”


Creating an artificial intelligence that could act upon just-war principles or the idea that civilian casualties should be minimized would involve elaborate programming. “That’s kind of what we’re worried about,” says George Lucas, Jr., a professor of ethics and public policy at the Naval Postgraduate School who has worked with Arkin. “Those extraordinarily complex algorithmical systems, they may operate fine 99 percent of the time, but every once in a while they go nuts.” If armed robots are eventually deployed, Lucas says they should be limited to simple and very tightly scripted scenarios, like protecting a no-go zone around a vessel at sea. In a counterinsurgency setting, the sheer number of complicated variables—determining who’s an enemy, ally, or noncombatant—might overwhelm a robot’s capabilities.

The Campaign to Stop Killer Robots suspects that any benefits of battlefield robots might come at the expense of civilians. “Reducing military casualties is a desirable goal, but you shouldn’t do that by putting civilians at risk,” says Goose of Human Rights Watch. “Most roboticists we’ve talked to say we’ll never get to a point that machines will adequately make distinctions between targets, or meet requirements of humanitarian law. Sometimes these decisions require emotions and compassion, and having a machine with attributes necessary for this kind of legal reasoning is not at all likely.”

So far, these questions remain largely hypothetical. But the Campaign to Stop Killer Robots wants to answer them before we find ourselves debating the ethics of a lethal technology that can’t be put back in the box. Should warbots become a reality, who will take the fall for an atrocity committed by an autonomous machine during the course of an operation? “If a robot commits a war crime, who’s responsible for it?” Goose asks. “The commander? The manufacturer? If you can’t hold someone responsible for a war crime, then there’s nothing to deter these war crimes.”
