Profs. Jeffrey Rachlinski & Andrew Wistrich Guest-Blogging on "Judging Autonomous Vehicles"


I'm delighted to report that we will have two sets of guest-posts this week: In addition to Prof. Robert Leider's posts on criminal law, Profs. Jeffrey Rachlinski & Andrew Wistrich (Cornell) will be guest-blogging this week (starting tomorrow) on their new article, Judging Autonomous Vehicles:

The introduction of any new technology challenges judges to determine how it [fits] into existing liability schemes. If judges choose poorly, they can unleash novel injuries on society without redress or stifle progress by overburdening a technological breakthrough.

The emergence of self-driving, or autonomous, vehicles will present an enormous challenge of this sort to judges, as this technology will alter the foundation of the largest source of civil liability in the United States. Although regulatory agencies will determine when and how autonomous cars may be placed into service, judges will likely play a central role in defining the standards for liability for them.

How will judges treat this new technology? People commonly exhibit biases against innovations, such as the naturalness bias, in which people disfavor injuries arising from artificial sources. In this paper we present data from 933 trial judges showing that judges exhibit bias against self-driving vehicles. They both assigned more liability to a self-driving vehicle than they would to a human-driven vehicle and treated injuries caused by a self-driving vehicle as more serious than injuries caused by a human-driven vehicle.



  1. All autonomous vehicles should have a big red OFF button. The autopilot on the Boeing Max killed two planes full of people, with the pilots unable to correct the diving.

    1. The problem on the Boeing Max wasn’t the autopilot. It was a fly-by-wire “assist” system that functions even when the pilot is actively controlling the plane.

      Maneuvering Characteristics Augmentation System

      If you want an analogy to a car for the MCAS, it would be a lot closer to electronic traction control systems than to an autonomous driving system.

    2. In addition to what Matthew said, all autonomous vehicles do have a big OFF button (though it’s not always red). Turning off the car is easy. Resuming manual control is also easy and intuitive.

      Perhaps an even better analogy than Matthew’s might be the new lane-assist technology that applies force through the steering wheel when the car thinks you are drifting out of your lane. My wife’s new car has lane-assist. I quickly became convinced that the car was out of alignment because I was constantly having to “correct” the car to where I wanted it (to avoid potholes on a road that has good lane markings but is otherwise very poorly maintained). When I realized that it was the stupid lane-assist trying to override my judgment, I had to pull to the side of the road and break out the manual to figure out how to turn the damn feature off.

  2. “The introduction of any new technology challenges judges to determine how it into existing liability schemes.”

    Pretty crappy opening sentence.

    1. There wasn’t room; the word didn’t fit. You must acquit.

  3. How does this compare to the existing bias against trucks and corporate vehicles? We have a term “nuclear verdict” to describe the behavior of juries towards such deep-pocketed defendants.

  4. Here is a wild and crazy idea: things cannot have liability, only people can.
    With “autonomous” cars, we have a long list:
    The owner, whether in the vehicle or not
    The passenger closest to whatever controls exist in the vehicle
    The manufacturer (corporate person) of the function that failed
    The system designer(s) of the function that failed
    The programmer(s) of the function that failed
    Trump, just because

  5. The dilemma of liability between an autonomous vehicle versus a human operator is not unlike the anti-vaxx position regarding vaccine “safety”: confusion about causality. By avoiding a positive action (getting vaccinated), a person obtains the illusion of control, thinking his health might otherwise be damaged and that he can avoid infection or fight it off naturally, a dubious proposition. Ultimately the problem is that humans are terrible at judging risk.

    On average, an autonomous vehicle will avoid many of the mistakes a human driver can make. However, its reliability is only as good as its programming and sensors, which will inevitably make some bad choices (because of inadequately anticipating all failure modes) that a human operator would not. That gives a cognitive bias towards the human.

    I’m strongly pro-vax, but given what I know about technology, I’m skeptical that it will ever be safe for vehicles to be unsupervised by humans, to avoid the corner cases of difficult sensor input. Highway driving is relatively easy. Destination/terminal navigation is not.

  6. I’m reminded of the underlying principle behind Ralph Nader’s _Unsafe at Any Speed_ — about 60 years ago, Nader argued that blaming the drivers in auto accidents wasn’t fair because the accidents were the fault of the vehicles.
