The use and development of “killer robots”—or weapons that can select and kill without human intervention—is “unconscionable,” charged a group of Nobel Peace Prize winners in a joint statement released Monday.
Published on the eve of a multi-day United Nations conference in Geneva, Switzerland, on the Convention on Certain Conventional Weapons (CCW), also known as the Inhumane Weapons Convention, the statement voices the group's support for a preemptive global ban on the weapons.
“It is unconscionable that human beings are expanding research and development of lethal machines that would be able to kill people without human intervention,” reads the statement, which was signed by a number of peace organizations and activists including Jody Williams, Archbishop Desmond Tutu, Mairead Maguire and Shirin Ebadi.
The letter cites a 2012 report by the human rights organization Human Rights Watch (HRW), which drew early attention to the proliferation of these weapon systems. On Monday, HRW, along with Harvard Law School’s International Human Rights Clinic, released another report—Shaking the Foundations: The Human Rights Implications of Killer Robots (pdf)—which argues that fully autonomous weapons would pose a grave threat to the basic human rights of civilians.
According to the report, fully autonomous weapons could be prone to killing people unlawfully: the weapons could not be programmed to handle every situation, and there is little prospect that they would possess human qualities, such as judgment, that facilitate compliance with the right to life in unforeseen circumstances.
HRW also found that the use of “killer robots” would undermine human dignity because “inanimate machines [can] not understand or respect the value of life, yet they would have the power to determine when to take it away.”
Further, the report calls into question whether there could ever be “meaningful accountability” for the actions of a fully autonomous weapon, noting that “both criminal and civil law are ill suited to the task.”
“In policing, as well as war, human judgment is critically important to any decision to use a lethal weapon,” said Steve Goose, arms division director at Human Rights Watch. “Governments need to say no to fully autonomous weapons for any purpose and to preemptively ban them now, before it is too late.”
* * *
The full statement calling for the ban on fully autonomous weapons is below:
In April 2013 in London, a group of nongovernmental organizations – most associated with the successful efforts to ban landmines and cluster munitions – publicly launched the “Campaign to Stop Killer Robots.” Their efforts have helped bring the issue of fully autonomous weapons to a broader audience and spur governments to begin discussions on these weapons this May in Geneva.
We, the undersigned Nobel Peace Prize Laureates, applaud this new global effort and whole-heartedly embrace its goal of a preemptive ban on fully autonomous weapons that would be able to select and attack targets on their own. It is unconscionable that human beings are expanding research and development of lethal machines that would be able to kill people without human intervention.
Not all that long ago such weapons were considered the subject of science fiction, Hollywood and video games. But some machines are already taking the place of soldiers on the battlefield. Some experts in the field predict that fully autonomous weapons could be developed within 20 to 30 years; others contend it could be even sooner. With the rapid development of drones and the expansion of their use in the wars in Afghanistan and Iraq – and beyond – billions of dollars are already being spent to research new systems for the air, land, and sea that one day would make drones seem as quaint as the Model T Ford does today.
Too many applaud the so-called success of drone warfare and extol the virtues of the weapons. While these unmanned aircraft can fly thousands of miles from home base on their own, they still require individuals watching computer screens to fire their weapons and attack a target. Already over 70 countries have drones, and many are looking to develop methods to make them ever more autonomous and to create new lethal robots that will, in fact, kill human beings on their own.
Those who favor the development of autonomous lethal robots make many arguments on their behalf. They note that such machines do not put soldiers’ lives at risk nor do they tire or become frightened. Emotion would not cloud their decision-making. They also say that ultimately lethal autonomous robots will be cheaper than manned systems and laud that feature in these times of cutting government budgets.
But not everyone accepts these arguments. In its very aptly titled report, “Losing Humanity: The Case Against Killer Robots,” Human Rights Watch outlined legal and other arguments against the development of such weapons. The report says that such robots would face serious challenges in meeting the tests of military necessity, proportionality and distinction, which are fundamental to the laws of war. Lethal autonomous weapons would also threaten essential non-legal safeguards for civilians: they would not be constrained by the capacity for compassion, which can provide a key check on killing civilians. These arguments were also brought to the fore in the report of the UN special rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns, presented to the UN Human Rights Council in May 2013.
Of course a key argument for robotic weapons is that using them could reduce military casualties. On the flip side, many fear that leaving the killing to machines might make going to war easier and shift the burden of armed conflict onto civilians. The use of fully autonomous weapons raises serious questions of accountability. Who should be held responsible for any unlawful actions they commit? The military commander? The company that makes the robot? The company that produces the software? The obstacles to holding anyone accountable are huge and would significantly weaken the power of the law to deter future violations.
While there has been some heated debate about the dangers and possible virtues of such weapons, until now it had occurred almost exclusively among scientists, ethicists, lawyers and the military. Even as killer robots loom over our future, there had been virtually no public discussion about the ethics and morality of fully autonomous weapons, let alone the implications and impact of their potential use.
But the work of the campaign is changing that, and even in the lead-up to the April 23rd launch of the Campaign to Stop Killer Robots, interest and public awareness had begun to grow. The press has increasingly begun to report on killer robots, with both the New York Times and the Wall Street Journal running opinion pieces outlining the moral and legal perils of creating killer robots and calling for public discourse before it is too late.
Lethal robots would completely and forever change the face of war and likely spawn a new arms race. Can humanity afford to follow such a path? We applaud and support the efforts of civil society’s Campaign to Stop Killer Robots to help move us away from a possible future of robotic warfare.