GENEVA — A U.N. expert issued a call Thursday for a global moratorium on the testing, production and use of armed robots that can select and kill targets without human command — a futuristic scenario he said is drawing frightfully near.
“War without reflection is mechanical slaughter,” said Christof Heyns, a United Nations special investigator.
“A decision to allow machines to be deployed to kill human beings worldwide — whatever weapons they use — deserves a collective pause,” he told the Geneva-based U.N. Human Rights Council.
No countries use such weapons, but the technology is available or soon will be, Heyns said.
The United States, Britain, Israel and South Korea already use technologies that are seen as precursors to fully autonomous systems. Little is known about Russian and Chinese work on the technology.
“My concern is that we may find ourselves on the other side of a line, and then it is very difficult to go back,” Heyns said in an interview. “If there’s ever going to be a time to regulate or stop these weapons, it’s now.”
He urged the council to set up a high-level panel to report within a year on the state of the art in “lethal autonomous robotics” and to assess whether international laws are adequate to control their use.
Preparations to introduce armed robots raise “far-reaching concerns about the protection of life during war and peace,” he said in a report to the council. “This includes questions of whether robots will make it easier for states to go to war,” by distancing people from decisions to kill.
Some nations active in developing such weapons have pledged not to deploy them for the foreseeable future, Heyns said. He noted a U.S. Defense Department directive issued in November that bans the use of lethal force by fully autonomous weapons for up to 10 years, unless specifically authorized by senior officials. The directive identified potential technological failures, which Heyns said underlined the need for caution.
On the other hand, he told the council, “it is clear that very strong forces — including technology and budgets — are pushing in the opposite direction.”
Several nongovernmental organizations and human rights groups are campaigning to ban fully autonomous weapons, aiming to pre-empt their deployment much as an earlier ban pre-empted blinding laser weapons. Discussions are underway with a number of governments that might be willing to take the lead in drafting a treaty, said Steve Goose of Human Rights Watch.
Supporters of the robots say they would offer a number of advantages: They process information faster than humans, and they are not subject to fear, panic, thirst for revenge or other emotions that can cloud human judgment. Robots, they say, also can be used to gather more accurate battlefield data, which could help direct fire more precisely and thus save lives.
A report by Human Rights Watch and Harvard Law School cites a U.S. Air Force assessment that “by 2030 machine capabilities will have increased to the point that humans have become the weakest component in a wide array of systems and processes.”
Human rights groups dispute the ability of robots to meet the requirements of international law, such as distinguishing between civilians and combatants and judging proportionality, or whether an act’s likely harm to civilians exceeds the military advantage to be gained by it.
Moreover, they say, in the event that a killer robot violates international laws — by slaughtering civilians, say — it is unclear who could be held responsible or punished.
“It is possible to halt the slide toward full autonomy in weaponry before moral and legal boundaries are crossed,” said Goose, of Human Rights Watch, “but only if we start to draw the line now.”