Issue Report: Ban on autonomous weapons (killer robots)

Should the development and use of autonomous weapons (killer robots) be banned?

Safety: Do autonomous weapons make the world less safe?

Autonomous weapons would be exploited by terrorists, rogue states

"Autonomous Weapons: an Open Letter from AI & Robotics Researchers," The Future of Life Institute, July 28, 2017:

“It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.”

Banning autonomous weapons would be ineffective against a threat that is not yet well characterized

Evan Ackerman, "Lethal Microdrones, Dystopian Futures, and the Autonomous Weapons Debate," IEEE Spectrum, Nov 15, 2017
“I find it difficult to support an outright ban at this point because I think doing so would be a potentially ineffective solution to a complex problem that has not yet been fully characterized. AI and arms-control experts are still debating what, specifically, should be regulated or banned, and how it would be enforced.”

Autonomous weapons cannot be eliminated, so should be made ethical

Evan Ackerman, "We Should Not Ban ‘Killer Robots,’ and Here’s Why," IEEE Spectrum, July 29, 2015

“We’re not going to be able to prevent autonomous armed robots from existing. The real question that we should be asking is this: Could autonomous armed robots perform better than armed humans in combat, resulting in fewer casualties on both sides?”

Autonomous weapons ban would be ineffective because the barrier to entry is low

Evan Ackerman, "We should not ban killer robots, and here's why," IEEE Spectrum, July 29, 2016

“no letter, UN declaration, or even a formal ban ratified by multiple nations is going to prevent people from being able to build autonomous, weaponized robots. The barriers keeping people from developing this kind of system are just too low. Consider the “armed quadcopters.” Today you can buy a smartphone-controlled quadrotor for US $300 at Toys R Us. Just imagine what you’ll be able to buy tomorrow. This technology exists. It’s improving all the time. There’s simply too much commercial value in creating quadcopters (and other robots) that have longer endurance, more autonomy, bigger payloads, and everything else that you’d also want in a military system.”

Robots may be better at avoiding unintended harm, death

Evan Ackerman, "We should not ban killer robots, and here's why," IEEE Spectrum, July 29, 2016

“the most significant assumption that this letter makes is that armed autonomous robots are inherently more likely to cause unintended destruction and death than armed autonomous humans are. This may or may not be the case right now, and either way, I genuinely believe that it won’t be the case in the future, perhaps the very near future. I think that it will be possible for robots to be as good (or better) at identifying hostile enemy combatants as humans, since there are rules that can be followed (called Rules of Engagement, for an example see page 27 of this) to determine whether or not using force is justified. For example, does your target have a weapon? Is that weapon pointed at you? Has the weapon been fired? Have you been hit? These are all things that a robot can determine using any number of sensors that currently exist.”
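The rule-based check Ackerman describes can be pictured as a short decision procedure over sensor-derived facts. Below is a minimal sketch in Python, assuming hypothetical fields (has_weapon, weapon_aimed_at_us, weapon_fired, we_are_hit) that a real system would have to extract from perception; these names are illustrative, not from the article, and an actual Rules of Engagement check would be far richer.

    from dataclasses import dataclass

    @dataclass
    class TargetObservation:
        # Hypothetical sensor-derived facts about a potential target
        has_weapon: bool          # e.g., from object recognition
        weapon_aimed_at_us: bool  # e.g., from pose estimation
        weapon_fired: bool        # e.g., from acoustic/muzzle-flash detection
        we_are_hit: bool          # e.g., from onboard damage sensors

    def force_justified(obs: TargetObservation) -> bool:
        # Encodes only the four example questions from the quote:
        # an armed target plus at least one indication of hostile intent.
        return obs.has_weapon and (
            obs.weapon_aimed_at_us or obs.weapon_fired or obs.we_are_hit
        )

    # Example: armed target aiming at the robot but not yet firing
    print(force_justified(TargetObservation(True, True, False, False)))  # True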

Robots can wait to fire until fired upon, unlike humans

Evan Ackerman, "We should not ban killer robots, and here's why," IEEE Spectrum, July 29, 2016

“It’s worth noting that Rules of Engagement generally allow for engagement in the event of an imminent attack. In other words, if a hostile target has a weapon and that weapon is pointed at you, you can engage before the weapon is fired rather than after in the interests of self-protection. Robots could be even more cautious than this: you could program them to not engage a hostile target with deadly force unless they confirm with whatever level of certainty that you want that the target is actively engaging them already. Since robots aren’t alive and don’t have emotions and don’t get tired or stressed or distracted, it’s possible for them to just sit there, under fire, until all necessary criteria for engagement are met. Humans can’t do this.”
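The "hold fire until actively engaged, with whatever level of certainty you want" policy amounts to a confidence threshold on evidence of incoming fire. Here is a minimal Python sketch of that idea, assuming independent hypothetical detectors (acoustic gunshot detection, muzzle-flash recognition, impact sensing) that each report a probability; the fusion rule, function names, and 0.99 threshold are illustrative assumptions, not from the article.

    from math import prod

    def under_fire_confidence(cue_probs: list[float]) -> float:
        # Fuse independent detector outputs (acoustic, visual, impact)
        # into one probability that the robot is actively being engaged.
        # Assumes the cues are independent, purely for simplicity.
        return 1.0 - prod(1.0 - p for p in cue_probs)

    def may_return_fire(cue_probs: list[float], threshold: float = 0.99) -> bool:
        # The stricter-than-human policy from the quote: keep holding fire
        # until the evidence of being engaged clears an operator-set bar.
        return under_fire_confidence(cue_probs) >= threshold

    # Example: a strong acoustic cue plus a weak visual cue
    print(may_return_fire([0.95, 0.40]))  # confidence 0.97 < 0.99, so False: hold fire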

Casual war: Do autonomous weapons make engaging in war too easy?

Autonomous weapons only marginally lower an already low bar for using force

Evan Ackerman, "We should not ban killer robots, and here's why," IEEE Spectrum, July 29, 2016

“I do agree that there is a potential risk with autonomous weapons of making it easier to decide to use force. But, that’s been true ever since someone realized that they could throw a rock at someone else instead of walking up and punching them. There’s been continual development of technologies that allow us to engage our enemies while minimizing our own risk, and what with the ballistic and cruise missiles that we’ve had for the last half century, we’ve got that pretty well figured out. If you want to argue that autonomous drones or armed ground robots will lower the bar even farther, then okay, but it’s a pretty low bar as is. And fundamentally, you’re then placing the blame on technology, not the people deciding how to use the technology.”
