The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Would you rather be run over by a self-driving car or a car driven by a human being? Assuming a similar vehicle travelling at a similar speed, the choice should hardly matter. A serious injury is a serious injury, whatever its cause. Likewise, a sensible transportation system should minimize the cost of accidents regardless of how they occur.
People are often suspicious of new technology, however. Reacting to public concern, regulators might adopt more stringent regulatory standards to govern autonomous vehicles than those that apply to conventional vehicles. Judges will also play an important role in the development of the liability system governing autonomous vehicles. Will they, too, react negatively to this new technology?
Evidence suggests they will. Some people are keen to begin using fully autonomous vehicles, but safety concerns make most reluctant. Many consumers insist that autonomous vehicles must demonstrate an accident rate that is one-fifth that of human-driven vehicles before they will be comfortable either driving them or sharing the road with them.
In experimental studies in which people evaluate accident vignettes, people react more negatively to accidents caused by autonomous vehicles than to accidents caused by human drivers. People also attribute more culpability to autonomous vehicles that cause accidents and treat such accidents as having inflicted more harm. Thus, even though autonomous vehicles will likely be a game-changer in terms of safety, many experts agree that hesitancy towards the technology will impede widespread adoption of autonomous vehicles.
Animosity towards autonomous vehicles should perhaps not be surprising. Several aspects of human risk perception suggest that people will perceive autonomous vehicles as more threatening than conventional vehicles. People treat unnatural, novel, and involuntarily incurred risks more seriously than familiar, conventional sources of harm. For example, people are more apt to blame defendants who adopt nontraditional medical treatment or investment strategies than those who stick with the tried and true.
Naturalness bias might also affect judgments about autonomous vehicles. People state that they would rather be evacuated from their home due to noxious fumes from a volcano than noxious fumes from an industrial accident and that they would rather get skin cancer from exposure to the sun than from exposure to a tanning bed. So too might accidents that autonomous vehicles cause seem more destructive than accidents that human-driven vehicles cause.
What about judges? Despite their image as sober, deliberative decision makers, judges are human beings, after all. They rely on many of the same kinds of potentially faulty decision-making strategies concerning risk that adversely affect most people. Judges might therefore approach liability for autonomous vehicles with the same hostility as the general public.
To test this, we conducted two experiments with 933 sitting state and federal trial judges. The judges participating in our research assessed a vignette describing an accident in which a taxi owned by a company with a fleet of both autonomous taxis and human-driven taxis struck a pedestrian crossing the street. For half of the judges, an autonomous taxi failed to detect the pedestrian because sunlight reflecting off of a building fooled its sensors. For the other half of the judges, the accident resulted when sunlight reflecting off of a building distracted its human driver. The circumstances of the accidents and extent of the injuries were otherwise identical.
In the first study, we asked judges to assess a comparative negligence scenario in which they allocated responsibility between the taxi and the pedestrian. In this study, the pedestrian was also at fault because she was texting on her cellphone while jaywalking.
Although the accidents were basically identical, judges assigned an average of 52% of the fault for the accident to the car when it was said to be an autonomous vehicle, as compared to 43% when it was said to be driven by a human. Furthermore, two-thirds of the judges evaluating the autonomous vehicle attributed at least half of the fault to the car, as compared to only half of the judges evaluating the human-driven vehicle.
In the second study, we presented a similar scenario to judges in which the pedestrian was entirely blameless, but in which the compensatory damage award was at issue. Although the materials described the injury identically in both cases, judges awarded an average of $340,000 when an autonomous vehicle caused the accident as opposed to $243,000 when a human-driven vehicle caused it.
Our results suggest that, like lay people, judges will disfavor autonomous vehicles. To be sure, in our first study, assigning more fault to an autonomous vehicle might be reasonable. Autonomous vehicles cause accidents when a team of expert engineers plans poorly, whereas human accidents result from ordinary inadvertence. People rightly expect more from technology firms than from an average human taxi driver. In our second study, however, treating an identical injury as more serious when an autonomous vehicle caused it is likely the product of an intuitive animosity towards a new technology.
Hostility from consumers, regulators, and the judiciary is not apt to prevent the widespread adoption of autonomous vehicles. If a new technology ultimately proves to be useful and safer, people will eventually demand it. Judicial hostility towards autonomous vehicles, however, risks unduly delaying their adoption or distorting their development, to the detriment of long-term public welfare.
The full paper is available here. Rachlinski and Wistrich have conducted decades of research on psychological phenomena that affect trial judges. They will be posting three abbreviated summaries of their results this week. Up tomorrow: the Bizarre Effect of Anchoring.