Science & Technology

Don't Fear the Robots! Human Responsibility Will Keep Them in Check.

Lawyers, insurance, and the media put the fear of liability in the hearts of the humans who create our automated tools


The discourse around robots, drones, and other autonomous machines may be setting us up to believe that in the future humans will not be responsible for the behavior of such machines. People's attention focuses on the increasing independence of these technologies from human control and intervention. What happens when an autonomous military weapon decides to attack a target and military personnel don't understand why the machine did what it did, let alone have an opportunity to intervene? Will no one—no human—be responsible? Or will there be some schema in which the machines themselves are responsible?

Experience suggests that existing systems of liability, insurance, and public criticism will make sure that humans take responsibility for their robotic creations.

Autonomous is the term used to describe machines that behave without direct or immediate human control. In the context of machines, however, autonomous seems to be a metaphor rather than a description. When humans behave independently, they exercise their capacity to decide and act, a capacity that derives from having autonomy. Machines now seemingly make decisions and act, but the capacity and the way behavior is generated in machines are radically different from what happens in humans.

In some sense, there is nothing new about machines behaving independently. The thermostats that keep our houses warm and the traffic lights that regulate traffic at intersections behave independently and without human intervention (once they are put in place and until they break down or need a new battery). When it comes to robots, drones, and unmanned vehicles, what is new is the capacity to make complicated decisions and to learn as they operate. Indeed, software programs can learn new strategies for solving a problem or performing a task as they engage in the process. As a result, designers and programmers cannot always predict exactly how their software will behave as it works toward a desired result. The learning capacity of autonomous technologies leads some to argue that in the future robots of various kinds—military and domestic—will have so much autonomy that we will not be able to hold humans responsible for how they behave.

To understand and evaluate this argument, the notion of autonomy needs to be unpacked. In fact, delving into how autonomous machines work suggests that autonomy can mean quite different things. Some use the term to refer to what might be called high-level automation. For example, a 2012 Department of Defense report, The Role of Autonomy in DOD Systems, characterized autonomy as "a capability (or a set of capabilities) that enables a particular action of a system to be automatic or, within programmed boundaries, 'self-governing.'" Others use autonomy to refer to a type of operation that differs from automation in that it is not preprogrammed. The 2011 DOD report Unmanned Systems Integrated Roadmap takes this stance in treating autonomous systems as those in which human operators and designers don't specify in advance all the steps by which a software program achieves its goals. Another DOD Roadmap report, from 2009, suggests that autonomous systems are those in which information flows back and forth between humans and machines, so that the machines are somewhat on par with humans in the collaboration and, in this sense, autonomous.


A simple way to get a handle on this is to think in terms of tasks. We delegate tasks to machines: the thermostat turns the furnace on and off; the Google search engine finds information on a topic. Complex operations involve multiple tasks. For example, the tasks involved in a military drone operation might be broken down into: navigate to a location, identify potential targets, determine whether noncombatants are in the area, send images back to various locations, and attack a target. Some of these tasks are delegated to humans and others to machines (typically a combination of software and hardware). Consider, for example, the following delegation of tasks: humans make the decision to send a drone to a location; on its own, the drone navigates to the location and sends images back to various locations; humans determine whether there are appropriate targets in the area, whether there is sufficient evidence that no noncombatants are present, and whether to attack a target; the drone attacks.

Of course, other combinations are possible, combinations in which humans do more of these subtasks and machines fewer, or vice versa. Presumably, in fully autonomous machines all the tasks are delegated to machines. This, then, poses the responsibility challenge. Imagine a drone circling in the sky, identifying a combat area, determining which of the humans in the area are enemy combatants and which are noncombatants, and then deciding to fire on enemy targets.

Although drones of this kind are possible, the description is somewhat misleading. In order for systems of this kind to operate, humans must be involved. Humans make the decisions to delegate to machines; the humans who design the system make decisions about how the machine tasks are performed or, at least, set the parameters within which the machine decisions will be made; and humans decide whether the machines are reliable enough to be delegated tasks in real-world situations.

Concerns about "no one" being responsible for autonomous technologies are both misleading and distracting. The future of autonomous technologies would be better served by more public discourse on the responsibility practices that should accompany the development and operation of autonomous systems. For a start, public attention should be focused on risk and reliability. Developers of autonomous technologies are certainly concerned about reliability and generally go to great lengths to make sure that their inventions are safe. Nevertheless, risk remains, and risk is managed, at least in part, by responsibility arrangements. If a designer or user were to deploy an autonomous system that learned and ultimately behaved in unpredictable, incomprehensible, and dangerous ways, we would (or should) hold that designer or user responsible for the consequences, as we do with other dangerous technologies.

Framing autonomous technologies in terms of risk is one way to think about the responsibility issues.  Responsibility doesn't come out of nowhere. A range of practices convey to individuals and groups what they are responsible for and what might happen to them if they fail to fulfill their responsibility. For example, in systems combining human and machine tasks, the humans involved are told what their responsibilities are and what might happen if they fail to fulfill them. Job descriptions specify what is expected from an individual in a particular position; training manuals and on-the-job training inform individuals about what is expected; and organizational culture may informally reinforce a sense of responsibility. Through these and other practices, individual responsibility is constructed and conveyed. 

When it comes to responsibility for the whole system (the combination of human and machine tasks), the practices are different, but it is still a set of practices that conveys responsibility to the developers and users of the system. Here legal liability and insurance schemas come into play, informing the humans involved as to what they will be responsible for and what the consequences of failure will be. Informal practices also help constitute responsibility for the whole system. For example, when the public, through the media, holds the military, manufacturers, the government, or engineers responsible for dangerous technologies or for technologies that run counter to long-held social values, that conveys the responsibility message. We have seen some of this in the case of drones and concerns about warfare at a distance. Through practices of public accountability and legal liability, human beings are held responsible for what technologies do and fail to do. This is precisely why public discourse should focus on who ultimately has to answer for the consequences of an autonomous machine's conduct, not on an end to human responsibility.

Of course, it is possible that future generations will decide that no human is responsible for the behavior of an autonomous system. Increasing autonomy together with increasing benefits may lead them to decide that the benefits are worth the risks. If that happens, however, it won't be because autonomous systems necessitate it. Practices holding humans responsible for autonomous technologies can be developed, just as they have been for other complicated and dangerous technologies.