Don't Fear the Robots! Human Responsibility Will Keep Them in Check.
Lawyers, insurance, and the media put the fear of liability in the hearts of the humans who create our automated tools
The discourse around robots, drones, and other autonomous machines may be setting us up to believe that in the future humans will not be responsible for the behavior of such machines. People's attention focuses on the increasing independence of these technologies from human control and intervention. What happens when an autonomous military weapon decides to attack a target and military personnel don't understand why the machine did what it did, let alone have an opportunity to intervene? Will no one—no human—be responsible? Or will there be some schema in which the machines themselves are responsible?
Experience suggests that existing systems of liability, insurance, and public criticism will make sure that humans take responsibility for their robotic creations.
Autonomous is the term used to describe machines that behave without direct or immediate human control. However, in the context of machines, autonomous seems to be a metaphor rather than a description. When humans behave independently, they exercise their capacity to decide and act, a capacity that derives from having autonomy. Machines now seemingly make decisions and act, but their capacity and the way behavior is generated in machines are radically different from what humans do.
In some sense, there is nothing new about machines behaving independently. The thermostats that keep our houses warm and the streetlights that regulate traffic behave independently and without human intervention (once they are put in place and until they break down or need a new battery). When it comes to robots, drones, and unmanned vehicles, what is new is the capacity to make complicated decisions and to learn as they operate. Indeed, software programs can learn new strategies for solving a problem or performing a task as they engage in the process. So, designers and programmers can no longer predict how their software will behave as it achieves a desired result. The learning capacity of autonomous technologies leads some to argue that in the future robots of various kinds—military and domestic—will have so much autonomy that we will not be able to hold humans responsible for how they behave.
To understand and evaluate this argument, the notion of autonomy needs to be unpacked. In fact, delving into how autonomous machines work suggests that autonomy can mean quite different things. Some use the term to refer to what might be called high-level automation. For example, a 2012 Department of Defense report, The Role of Autonomy in DOD Systems, characterized autonomy as "a capability (or a set of capabilities) that enables a particular action of a system to be automatic or, within programmed boundaries, 'self-governing.'" Others use autonomy to refer to a type of operation that is different from automation in that it is not preprogrammed. The 2011 DOD report, Unmanned Systems Integrated Roadmap, takes this stance in treating autonomous systems as those in which human operators and designers don't specify in advance all the steps by which a software program achieves its goals. Another DOD Roadmap report from 2009 suggests that autonomous systems are those in which information goes back and forth between humans and machines, so the machines are somewhat on par with humans in their collaboration and in this sense autonomous.
A simple way to get a handle on this is to think in terms of tasks. We delegate tasks to machines: the thermostat turns the furnace on and off, the Google search engine finds information on a topic. Complex operations involve multiple tasks. For example, the tasks involved in a military drone operation might be broken down into: navigate to a location, identify potential targets, determine whether noncombatants are in the area, send images back to various locations, and attack a target. Typically some tasks are delegated to humans and others to machines (software and hardware). Consider, for example, the following delegation of tasks: humans make the decision to send a drone to a location; on its own, the drone navigates to the location and sends images back to various locations; humans determine whether there are appropriate targets in the area, whether there is sufficient evidence that no noncombatants are present, and whether to attack a target; the drone attacks.
Of course, other combinations are possible, combinations in which humans take on more of these subtasks and machines fewer, or vice versa. Presumably, in fully autonomous machines all the tasks are delegated to machines. This, then, poses the responsibility challenge. Imagine a drone circling in the sky, identifying a combat area, determining which of the humans in the area are enemy combatants and which are noncombatants, and then deciding to fire on enemy targets.
Although drones of this kind are possible, the description is somewhat misleading. In order for systems of this kind to operate, humans must be involved. Humans make the decisions to delegate to machines; the humans who design the system make decisions about how the machine tasks are performed or, at least, they set the parameters in which the machine decisions will be made; and humans decide whether the machines are reliable enough to be delegated tasks in real-world situations.
Concerns about "no one" being responsible for autonomous technologies are both misleading and distracting. The future of autonomous technologies would be better served by more public discourse on the responsibility practices that should accompany the development and operation of autonomous systems. For a start, public attention should be focused on risk and reliability. Developers of autonomous technologies are certainly worried about reliability. Generally developers go to some length to make sure that their inventions are safe and reliable. Nevertheless, there is risk and risk is managed, at least in part, by responsibility arrangements. If a designer or user were to deploy an autonomous system that would learn and ultimately behave in unpredictable, incomprehensible, and dangerous ways, we would (or should) hold the designer or the user who deployed the machine responsible for the consequences, as we do with other dangerous technologies.
Framing autonomous technologies in terms of risk is one way to think about the responsibility issues. Responsibility doesn't come out of nowhere. A range of practices convey to individuals and groups what they are responsible for and what might happen to them if they fail to fulfill their responsibility. For example, in systems combining human and machine tasks, the humans involved are told what their responsibilities are and what might happen if they fail to fulfill them. Job descriptions specify what is expected from an individual in a particular position; training manuals and on-the-job training inform individuals about what is expected; and organizational culture may informally reinforce a sense of responsibility. Through these and other practices, individual responsibility is constructed and conveyed.
When it comes to responsibility for the whole system (the combination of human and machine tasks), the practices are different, but it is still a set of practices that conveys the responsibility to developers and users of the system. Here legal liability and insurance schemas come into play in informing the humans involved as to what they will be responsible for and what the consequences of failure will be. Informal practices also constitute responsibility for the whole system. For example, when the public, through the media, holds the military, manufacturers, the government, or engineers responsible for dangerous technologies or technologies that run counter to long-held social values, this conveys the responsibility message. We have seen some of this in the case of drones and concerns about warfare at a distance. Through practices of public accountability and legal liability, human beings are held responsible for what technologies do and fail to do. This is precisely why public discourse should be focused on who ultimately has to answer for the consequences of an autonomous machine's conduct, and not on an end to human responsibility.
Of course, it is possible that future generations decide that no human is responsible for the behavior of an autonomous system. Increasing autonomy together with increasing benefits may lead them to decide that the risks are worth the benefits. However, if that happens, it won't be because autonomous systems necessitate it. Practices holding humans responsible for autonomous technologies can be developed as they have been for other complicated and dangerous technologies.
I think “autonomous robot” and can hear myself cursing and shouting at automated customer-support bots. It gets worse when they eat up another minute of my time to be “courteous.”
Just get out the katana and show them who’s boss.
Web interfaces are far superior when they’re not broken.
This is not a difficult problem. Robots are machines, and people have always been responsible for the harm done by their machines. Suppose the mechanical parking brake fails on my neighbor's car and it rolls down his driveway and hits my car parked across the street. He is going to be responsible for the damages. The fact that the car did it, not him, won't make any difference. His robot lawn mower malfunctioning and hitting my car would be the same principle.
Ah, but when machines “think” the situation is more, um, akin to that of responsibility for the harm done by a person’s *child*.
Same answer. But what is “thinking”? Does the car think when it slips into gear and rolls down the hill? In a way it does. I don't see how the complexity of the thinking or the machine makes any difference. It's your machine; you are responsible for controlling it.
Sure, but those places screw people with tort law too. And indeed a lot of people would say the US is too tort happy and makes it too easy to hold someone liable when something bad happens.
Jones v. Tobor the Robot Assassin – Let’s draw straws to see who serves process on Tobor!
Private parties held accountable, sure.
But what happens when politicians, hiding behind bureaucrats making decisions, unleash horrors? Or just do it and brazen it out, such as Truman nuking Japanese cities or any sociopathic politician ever who started a war?
Can you trust these people to not do something terrible with robots?
Of course you can! Doing so might cause a scandal, which could affect their chances of reelection four years later.
Robots have served me incredibly well, making my life easier and much better, and robots have caused me great headaches. Whether their presence has been beneficial or malevolent has rested entirely upon the type of person or persons who made them and what their intentions were.
The list of anti robotic technology loons has really been growing lately. They’ve even managed to secure a few well known names in science to help give their idiocy the false appearance of sound science. It’s one trick they’ve finely honed in recent years.
In other news, Luddite invents robot machine to destroy robot technology quicker.
Don’t Fear the Robots! Human Responsibility Will Keep Them in Check.
Therein lies the danger, don't you think? It is, after all, 2015 and we still execute people; we've started unnecessary wars with little concern for the consequent human collateral, while torturing prisoners with the sanction of our leaders. We still cannot give equal rights to all people. In countries all over the world, including this one, we have pockets of corrupt government, and our advanced technology can be hacked by anyone with enough determination, including governments that do it without our knowledge. If left to our own devices, we will pollute every parcel of land in the name of the almighty dollar. I think maybe robots will give themselves the responsibility of keeping humans in check, and honestly, who would blame them?
Machines capable of abstract computation are the real robots and probably where the pathway of robotic sentience begins. Even then the average sentient robot can be limited.
I think my concern would lie with a future where human tyranny imposes robots on civilization as an elitist alternative to humanity. The robotic overthrow of the human race isn’t possible without craven humans.
Kudos AC, you get it.
The future needs us, bro.
Whether or not humans take “responsibility” for machines, one thing is virtually certain: Machines will become sophisticated enough to take over all but a very few human jobs. Read “The Lights in the Tunnel” by Martin Ford. All this “Race with the Machines” nonsense is mostly fluff, and offers no tangible solution to this coming crisis.