The president said the strikes are only carried out after “consultation with partners” in other countries and with “respect for state sovereignty,” targeting only “terrorists who pose a continuing and imminent threat to the American people” and who cannot be captured, and only when there’s a “near-certainty that no civilians will be killed or injured.”
In effect, Obama argued that the drones comport with all the requirements of the international law of armed conflict. But he and his administration know that this isn’t a universally shared assessment.
“The exponential rise in the use of drone technology in a variety of military and non-military contexts represents a real challenge to the framework of established international law,” said Ben Emmerson, the United Nations’ Special Rapporteur on Counterterrorism and Human Rights, when announcing an inquiry earlier this year “into the civilian impact, and human rights implications of the use [of] drones and other forms of targeted killing for the purpose of counter-terrorism and counter-insurgency.”
The muddy nature of these armed conflicts—Who is legally a terrorist? Who is an insurgent?—has complicated the development of rules for using drones. “The world is facing a new technological development which is not easily accommodated within the existing legal frameworks, and none of the analyses that have been floated is entirely satisfactory or comprehensive,” Emmerson said.
Washington does not get much sympathy with its claim of requiring special latitude to fight a uniquely stateless enemy. “This analysis is heavily disputed by most States, and by the majority of international lawyers outside the United States of America,” Emmerson said. Yet “the plain fact is that this technology is here to stay, and its use in theatres of conflict is a reality with which the world must contend.”
The Future of Drones
The way that the military has struggled with questions about killer robots exposes thorny moral questions about launching drones that can act on their own. Resolving these questions will have vast implications for our future robot cropdusters, cargo-carriers, and city watchers.
Ask any military officer trying to make drone policy today, and he or she will insist that a robot will never, under any circumstances, be given the authority to decide whom to kill. A human being will always be touching the loop, even if he's not entirely in it.
Yet the Defense Department's own research into autonomous systems suggests otherwise. Under the Persistent Close Air Support program, for example, engineers are looking for ways to speed up the process of sending tactical air support to assist ground forces. Calling in an air strike now takes about a half-hour; the goal is to whittle that down to six minutes. To do that, drones will have to be programmed to respond with some degree of independence to threats on the ground. There isn't time to wait for a human to direct fire.
And even if a breathing person is required to make the final call to fire a missile, it’s hard to imagine him overruling the robot. After all, it will be the drone that “sees” the threat on the ground, processes the images and sound, and coordinates the action, all faster than the human being can. If the relevant officer is effectively just following the drone’s lead, isn’t the robot really the one in charge?
That’s the question we’ll all be asking when drones finally take off in America.