Issue Report: Ban on autonomous weapons (killer robots)

Should the development and use of autonomous weapons (killer robots) be banned?

Safety: Do autonomous weapons make the world less safe?

Autonomous weapons would be exploited by terrorists, rogue states

"Autonomous Weapons: an Open Letter from AI & Robotics Researchers," The Future of Life Institute, July 28, 2017:

“It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.”

Autonomous weapons will be used as weapons of mass destruction

Toby Walsh, professor of artificial intelligence at the University of New South Wales, Australia: “These will be weapons of mass destruction. One programmer and a 3D printer can do what previously took an army of people. They will industrialize war, changing the speed and duration of how we can fight. They will be able to kill 24-7 and they will kill faster than humans can act to defend themselves.”

Banning autonomous weapons would be ineffectual against an unclear threat

Evan Ackerman, "Lethal Microdrones, Dystopian Futures, and the Autonomous Weapons Debate," IEEE Spectrum, Nov 15, 2017
“I find it difficult to support an outright ban at this point because I think doing so would be a potentially ineffective solution to a complex problem that has not yet been fully characterized. AI and arms-control experts are still debating what, specifically, should be regulated or banned, and how it would be enforced.”

Autonomous weapons cannot be eliminated, so should be made ethical

Evan Ackerman, "We Should Not Ban ‘Killer Robots,’ and Here’s Why," IEEE Spectrum, July 29, 2015

“We’re not going to be able to prevent autonomous armed robots from existing. The real question that we should be asking is this: Could autonomous armed robots perform better than armed humans in combat, resulting in fewer casualties on both sides?”

Autonomous weapons ban would be ineffectual as barrier to entry is low

Evan Ackerman, "We should not ban killer robots, and here's why," IEEE Spectrum, July 29, 2016

“no letter, UN declaration, or even a formal ban ratified by multiple nations is going to prevent people from being able to build autonomous, weaponized robots. The barriers keeping people from developing this kind of system are just too low. Consider the “armed quadcopters.” Today you can buy a smartphone-controlled quadrotor for US $300 at Toys R Us. Just imagine what you’ll be able to buy tomorrow. This technology exists. It’s improving all the time. There’s simply too much commercial value in creating quadcopters (and other robots) that have longer endurance, more autonomy, bigger payloads, and everything else that you’d also want in a military system.”

Robots may be better at avoiding unintended harm, death

Evan Ackerman, "We should not ban killer robots, and here's why," IEEE Spectrum, July 29, 2016

“the most significant assumption that this letter makes is that armed autonomous robots are inherently more likely to cause unintended destruction and death than armed autonomous humans are. This may or may not be the case right now, and either way, I genuinely believe that it won’t be the case in the future, perhaps the very near future. I think that it will be possible for robots to be as good (or better) at identifying hostile enemy combatants as humans, since there are rules that can be followed (called Rules of Engagement, for an example see page 27 of this) to determine whether or not using force is justified. For example, does your target have a weapon? Is that weapon pointed at you? Has the weapon been fired? Have you been hit? These are all things that a robot can determine using any number of sensors that currently exist.”
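
To make that checklist concrete, here is a minimal illustrative sketch of the kind of rule-based engagement gate Ackerman describes. The data structure, field names, and sensing assumptions are hypothetical, invented for illustration; no real weapon system's logic is implied.

```python
from dataclasses import dataclass

@dataclass
class TargetObservation:
    """Hypothetical fused sensor readings about a potential target."""
    has_weapon: bool          # e.g., from visual object recognition
    weapon_aimed_at_us: bool  # e.g., from pose estimation
    weapon_fired: bool        # e.g., from acoustic gunshot detection
    we_were_hit: bool         # e.g., from onboard damage sensors

def engagement_justified(obs: TargetObservation) -> bool:
    """Encode the checklist from the quote above: force is justified
    only against an armed target showing hostile intent or a hostile act."""
    if not obs.has_weapon:
        return False  # unarmed target: never engage
    hostile_act = obs.weapon_fired or obs.we_were_hit
    hostile_intent = obs.weapon_aimed_at_us
    return hostile_act or hostile_intent
```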

Robots can wait to fire until fired upon, unlike humans

Evan Ackerman, "We should not ban killer robots, and here's why," IEEE Spectrum, July 29, 2016

“It’s worth noting that Rules of Engagement generally allow for engagement in the event of an imminent attack. In other words, if a hostile target has a weapon and that weapon is pointed at you, you can engage before the weapon is fired rather than after in the interests of self-protection. Robots could be even more cautious than this: you could program them to not engage a hostile target with deadly force unless they confirm with whatever level of certainty that you want that the target is actively engaging them already. Since robots aren’t alive and don’t have emotions and don’t get tired or stressed or distracted, it’s possible for them to just sit there, under fire, until all necessary criteria for engagement are met. Humans can’t do this.”
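
The stricter policy in this passage (hold fire until actively engaged, at a commander-chosen level of certainty) can be sketched as a single threshold check. The probability input and the 0.99 default are assumptions for illustration only, not a proposed standard.

```python
def may_return_fire(p_actively_engaging: float,
                    required_certainty: float = 0.99) -> bool:
    """Return True only when the system is at least `required_certainty`
    sure the target is already engaging it. Unlike a human under fire,
    the robot can keep returning False until the criterion is met."""
    if not 0.0 <= p_actively_engaging <= 1.0:
        raise ValueError("probability must be in [0, 1]")
    return p_actively_engaging >= required_certainty

# A commander demanding near-certainty before any response:
assert not may_return_fire(0.90)   # suspicious, but hold fire
assert may_return_fire(0.995)      # confirmed active engagement
```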

Autonomous weapons don't suffer from dangerous emotional responses

Ronald Arkin, "Warfighting Robots Could Reduce Civilian Casualties, So Calling for a Ban Now Is Premature," IEEE Spectrum, August 5, 2015

“Unmanned robotic systems can be designed without emotions that cloud their judgment or result in anger and frustration with ongoing battlefield events.”

Autonomous weapons can act conservatively, don't have to protect themselves

Ronald Arkin, "Warfighting Robots Could Reduce Civilian Casualties, So Calling for a Ban Now Is Premature," IEEE Spectrum, August 5, 2015

“The ability to act conservatively: i.e., they do not need to protect themselves in cases of low certainty of target identification. Autonomous armed robotic vehicles do not need to have self-preservation as a foremost drive, if at all. They can be used in a self-sacrificing manner if needed and appropriate without reservation by a commanding officer. There is no need for a ‘shoot first, ask-questions-later’ approach, but rather a ‘first-do-no-harm’ strategy can be utilized instead. They can truly assume risk on behalf of the noncombatant, something that soldiers are schooled in, but which some have difficulty achieving in practice.”

Autonomous weapons can process more data for more accurate actions

Ronald Arkin, "Warfighting Robots Could Reduce Civilian Casualties, So Calling for a Ban Now Is Premature," IEEE Spectrum, August 5, 2015

“Intelligent electronic systems can integrate more information from more sources far faster before responding with lethal force than a human possibly could in real-time.”
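
As one illustration of what "integrating more information from more sources" can mean computationally, the sketch below fuses per-sensor confidence estimates by summing log-odds, a standard way of combining independent evidence. The sensor names and all numbers are invented for illustration; a real system would be far more involved.

```python
import math

def log_odds(p: float) -> float:
    return math.log(p / (1.0 - p))

def fuse(sensor_probs: list[float], prior: float = 0.5) -> float:
    """Combine per-sensor probabilities of a hostile target,
    treating the sensors (simplistically) as independent."""
    total = log_odds(prior) + sum(log_odds(p) - log_odds(prior)
                                  for p in sensor_probs)
    return 1.0 / (1.0 + math.exp(-total))

# Hypothetical radar, optical, and acoustic confidences:
print(round(fuse([0.7, 0.8, 0.9]), 3))  # ≈ 0.988
```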

Robots and humans can hold each other accountable

Ronald Arkin, "Warfighting Robots Could Reduce Civilian Casualties, So Calling for a Ban Now Is Premature," IEEE Spectrum, August 5, 2015

“When working in a team of combined human soldiers and autonomous systems as an organic asset, they have the potential capability of independently and objectively monitoring ethical behavior in the battlefield by all parties, providing evidence and reporting infractions that might be observed. This presence alone might possibly lead to a reduction in human ethical infractions.”

Casual war: Do autonomous weapons make engaging in war too easy and casual?

Evan Ackerman, "We should not ban killer robots, and here's why," IEEE Spectrum, July 29, 2016

“I do agree that there is a potential risk with autonomous weapons of making it easier to decide to use force. But, that’s been true ever since someone realized that they could throw a rock at someone else instead of walking up and punching them. There’s been continual development of technologies that allow us to engage our enemies while minimizing our own risk, and what with the ballistic and cruise missiles that we’ve had for the last half century, we’ve got that pretty well figured out. If you want to argue that autonomous drones or armed ground robots will lower the bar even farther, then okay, but it’s a pretty low bar as is. And fundamentally, you’re then placing the blame on technology, not the people deciding how to use the technology.”

Arms race: Absent a ban, will there be a pernicious arms race in autonomous weapons?

There will be an arms race in autonomous weapons with consequences

"Autonomous Weapons: an Open Letter from AI & Robotics Researchers," The Future of Life Institute, July 28, 2017:

“The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc.”

Feasibility: Is banning autonomous weapons feasible?

It is not feasible to ban the broad category of autonomous weapons

Tom Simonite, "Sorry, Banning ‘Killer Robots’ Just Isn’t Practical," Wired, August 22, 2017

“The group’s warning that autonomous machines ‘can be weapons of terror’ makes sense. But trying to ban them outright is probably a waste of time. That’s not because it’s impossible to ban weapons technologies. Some 192 nations have signed the Chemical Weapons Convention that bans chemical weapons, for example. An international agreement blocking use of laser weapons intended to cause permanent blindness is holding up nicely. Weapons systems that make their own decisions are a very different, and much broader, category. The line between weapons controlled by humans and those that fire autonomously is blurry, and many nations—including the US—have begun the process of crossing it. Moreover, technologies such as robotic aircraft and ground vehicles have proved so useful that armed forces may find giving them more independence—including to kill—irresistible.”
