The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Judging Autonomous Vehicles
Research conducted with sitting judges suggests that autonomous vehicles will be judged more harshly than conventional vehicles.
Would you rather be run over by a self-driving car or a car driven by a human being? Assuming a similar vehicle travelling at a similar speed, the choice should hardly matter. A serious injury is a serious injury, whatever its cause. Likewise, a sensible transportation system should minimize the cost of accidents regardless of how they occur.
People are often suspicious of new technology, however. Reacting to public concern, regulators might adopt more stringent regulatory standards to govern autonomous vehicles than apply to conventional vehicles. Judges will also play an important role in the development of the liability system governing autonomous vehicles. Will they also react negatively to this new technology?
Evidence suggests they will. Some people are keen to begin using fully autonomous vehicles, but safety concerns make most reluctant. Many consumers insist that autonomous vehicles must demonstrate an accident rate that is one-fifth that of human-driven vehicles before they will be comfortable either driving them or sharing the road with them.
In experimental studies in which people evaluate accident vignettes, people react more negatively to accidents caused by autonomous vehicles than to accidents caused by human drivers. People also attribute more culpability to autonomous vehicles that cause accidents and treat such accidents as having inflicted more harm. Thus, even though autonomous vehicles will likely be a game-changer in terms of safety, many experts agree that hesitancy towards the technology will impede widespread adoption of autonomous vehicles.
Animosity towards autonomous vehicles should perhaps not be surprising. Several aspects of human risk perception suggest that people will perceive autonomous vehicles as more threatening than conventional vehicles. People treat unnatural, novel, and involuntarily incurred risks more seriously than familiar, conventional sources of harm. For example, people are more apt to blame defendants who adopt nontraditional medical treatment or investment strategies than those who stick with the tried and true.
Naturalness bias might also affect judgments about autonomous vehicles. People state that they would rather be evacuated from their home due to noxious fumes from a volcano than noxious fumes from an industrial accident and that they would rather get skin cancer from exposure to the sun than from exposure to a tanning bed. So too might accidents that autonomous vehicles cause seem more destructive than accidents that human-driven vehicles cause.
What about judges? Despite their image as sober, deliberative decision makers, judges are human beings, after all. They rely on many of the same potentially faulty decision-making strategies concerning risk that adversely affect most people. Judges might therefore approach liability for autonomous vehicles with the same hostility as the general public.
To test this, we conducted two experiments with 933 sitting state and federal trial judges. The judges participating in our research assessed a vignette describing an accident in which a taxi owned by a company with a fleet of both autonomous taxis and human-driven taxis struck a pedestrian crossing the street. For half of the judges, an autonomous taxi failed to detect the pedestrian because sunlight reflecting off of a building fooled its sensors. For the other half of the judges, the accident resulted when sunlight reflecting off of a building distracted its human driver. The circumstances of the accidents and extent of the injuries were otherwise identical.
In the first study, we asked judges to assess a comparative negligence scenario in which they allocated responsibility between the taxi and the pedestrian. In this study, the pedestrian was also at fault because she was texting on her cellphone while jaywalking.
Although the accidents were basically identical, judges assigned an average of 52% of the fault for the accident to the car when it was said to be an autonomous vehicle, as compared to 43% when it was said to be driven by a human. Furthermore, two-thirds of the judges evaluating the autonomous vehicle attributed at least half of the fault to the car, as compared to only half of the judges evaluating the human-driven vehicle.
In the second study, we presented a similar scenario to judges in which the pedestrian was entirely blameless, but in which the compensatory damage award was at issue. Although the materials described the injury identically in both cases, judges awarded an average of $340,000 when an autonomous vehicle caused the accident as opposed to $243,000 when a human-driven vehicle caused it.
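(For a rough sense of how the two findings would compound if they operated together, here is a minimal back-of-the-envelope sketch using only the averages reported above; the studies measured the two effects separately, so combining them is purely illustrative.)

```python
# Illustrative only: the two studies measured fault allocation and damages
# separately; combining them is a back-of-the-envelope exercise.

fault_share = {"autonomous": 0.52, "human": 0.43}     # study 1: average fault assigned to the car
damages = {"autonomous": 340_000, "human": 243_000}   # study 2: average compensatory award ($)

for vehicle in ("autonomous", "human"):
    expected = fault_share[vehicle] * damages[vehicle]
    print(f"{vehicle}: {fault_share[vehicle]:.0%} fault x ${damages[vehicle]:,} = ${expected:,.0f}")

ratio = (fault_share["autonomous"] * damages["autonomous"]) / (fault_share["human"] * damages["human"])
print(f"Combined liability ratio (autonomous vs. human): roughly {ratio:.1f}x")
```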
Our results suggest that, like lay people, judges will disfavor autonomous vehicles. To be sure, in our first study, assigning more fault to an autonomous vehicle might be reasonable. Autonomous vehicles cause accidents when a team of expert engineers plans poorly, whereas human accidents result from ordinary inadvertence. People rightly expect more from technology firms than from an average human taxi driver. In our second study, however, treating an identical injury as more serious when an autonomous vehicle caused it is likely the product of an intuitive animosity towards a new technology.
Hostility from consumers, regulators, and the judiciary is not apt to prevent the widespread adoption of autonomous vehicles. If a new technology ultimately proves to be useful and safer, people will eventually demand it. Judicial hostility towards autonomous vehicles, however, risks unduly delaying or distorting their development, to the detriment of long-term public welfare.
The full paper is available here. Rachlinski and Wistrich have conducted decades of research on psychological phenomena that affect trial judges. They will be posting three abbreviated summaries of their results this week. Up tomorrow: the Bizarre Effect of Anchoring.
An autonomous vehicle accident is more likely to be the responsibility of the manufacturer, and could indicate that other vehicles of the same design and model could have the same problem. Therefore, the courts need to demand different penalties and corrections than they would for, say, a drunk driver.
I had the same thought: Human flaws are basically baked in, computer flaws are the result of design decisions.
So it's understandable that accidents that externally look the same get treated as inadvertent for humans, but culpable for machines. The machine could have been designed to not make that mistake, after all. Like that bag lady who got run over because Uber had shut off the automatic braking to avoid false positives. A human decision had led to the death.
If we reach a point where you can purchase upgrades to your reflexes, or better vision, I expect human accidents due to lack of such upgrades will be viewed in more the same way, just as today we treat impairment due to use of alcohol as making one guilty, not innocent, because drinking was itself a choice.
In the scenario presented, sunlight fooling a taxi sensor demonstrates poor engineering design. The sun is going to be in all sorts of positions. It is also going to rain, snow, the road will get icy, and so on. During the winter, in the northeast, they put so much salt on the road it cakes my windshield during long drives, blocks my radar, and shuts down my cruise control.
I would have definitely assigned more "culpability" to the car, not because of technological aversion, but because I think it shows the taxi was not adequately tested/designed for operating conditions.
I'm reminded of the problem with the B-17 where exhausted pilots confused the switch to lower the flaps with the switch to lower the landing gear -- and eventually this got realized as a design issue.
That said, our current approach to OUI is unreasonable because a woman in the second half of pregnancy -- and arguably on her period -- is a far less competent driver than an adult male with a .08 BAC. Likewise, there are all kinds of normally prescribed drugs that have a far greater impediment on operation than a .08 BAC, let alone basic fatigue.
The issue with our current OUI laws is the presumption of impairment -- and in most states the allegation is not "intoxication" but merely having an "excessive blood alcohol level."
OUI may have been a real problem in the 1970s -- I'll grant that.
But the precedents that we have established over the past 30 years should cause concern to any objective person who believes in civil liberties.
Like I said -- a pregnant or menstruating woman is arguably less able to safely operate a motor vehicle. What are the implications of criminal charges based on THAT????
See: https://medium.com/swlh/the-flying-fortress-fatal-flaw-694523359eb
No, I don't make stuff up....
I disagree. If the source of the problem is systemic (which would be the case if it was the manufacturer's fault), then the manufacturer already has a strong incentive to find and correct the problem - they don't want to get hit with multiple suits for the same problem. The individual drunk driver, on the other hand, has no scalable incentives.
Mathematically, the two factors should cancel out - the manufacturer's greater responsibility to avoid duplication is exactly balanced by the greater risk of duplication.
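A minimal sketch of that cancellation argument, with hypothetical numbers (one replicated design flaw versus one individual mistake):

```python
# Sketch of the "cancel out" point: a replicated design flaw produces more
# accidents, but the manufacturer's total exposure already scales with the
# number of accidents, without any per-accident premium for being a machine.
# All numbers are hypothetical.

per_accident_damages = 243_000        # same injury, same award (hypothetical)

driver_exposure = 1 * per_accident_damages          # one mistake, one accident
accidents_from_flaw = 50                            # one flaw replicated across a fleet
manufacturer_exposure = accidents_from_flaw * per_accident_damages

print(f"Driver exposure:       ${driver_exposure:,}")
print(f"Manufacturer exposure: ${manufacturer_exposure:,}")
```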
To the comment above assigning more culpability to the car because variable visibility conditions are known, how is that any different from your culpability as a human? You also know that visibility conditions are highly variable and that the sun could be in your eyes during that part of your drive. You are letting the human driver off the hook unfairly.
I am not sure that "an accident rate that is one-fifth that of human-driven vehicles" is unreasonable. That might even be too high.
First of all, about 30 percent of fatal crashes involve an impaired driver (drinking, drugs), so eliminate those off the top. Then we have distracted driving (another 6%), speeding (26%), and fatigue. Half of all crash deaths happen on Friday, Saturday, and Sunday. The Fourth of July is the worst day of the year.
In other words, most auto fatalities are due to failures of the human, not mechanical failures of the car.
Machines don't get tired, don't daydream, don't drink, and obey the speed limit. So the correct question is: what is the appropriate mechanical failure rate (including software failure)? Algorithms are very good at handling cases for which they have been programmed. So the liability issue in my mind becomes: what are they programmed to handle? For example, if they are not programmed to handle bicycles in a dark residential neighborhood, that seems like a problem. Are they appropriately programmed to adjust to rainy, icy, snowy, or windy conditions? Machine learning algorithms are very good at average cases; it's the outliers (uncommon cases) which will cause accidents.
Obviously there cannot be a rule for everything, but one can certainly program a lot of rules. My feeling is that the mechanical failure rate of autonomous vehicles should be well below "one-fifth that of human-driven vehicles." Not zero, but maybe 1%, easily.
My conclusion is not because I am technologically averse. Quite the opposite, I write a lot of code. My conclusion is based on the fact that I know exactly what technology is capable of.
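A rough tally of the figures quoted in this comment, treating the categories as disjoint even though NHTSA's categories overlap (as another comment below points out), so the sum overstates the true human-failure share:

```python
# Rough tally of the shares quoted above. NHTSA's categories overlap, so
# treating them as disjoint (as this does) overstates the human-failure share.

human_failure_shares = {
    "impaired driving": 0.30,
    "distracted driving": 0.06,
    "speeding": 0.26,
}

naive_total = sum(human_failure_shares.values())   # ignores overlap between categories
residual = 1.0 - naive_total                       # crashes remaining even if all of these vanished

print(f"Naive human-failure share: {naive_total:.0%}")
print(f"Residual share:            {residual:.0%}")
```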
Let me ask a slightly different question.
Why should we go easier on human drivers who make mistakes than human programmers who make mistakes?
Well, because one person making a mistake driving causes one accident.
One programmer making a mistake programming causes a whole class of accidents.
If I fail to take into account that there's a hill to my left reducing visibility when pulling into an intersection, and so don't check to my left immediately before advancing, I cause one accident.
If I'm programming a self-driving system, and program in that mistake, every car using my program will make it, resulting in many accidents.
Programmers' mistakes scale.
But, are the programmers themselves being held accountable here? I don't think so, I think it's the companies employing them.
Good point.
But as dwb68 observes above, humans fail in predictable and patterned ways. Sure, each individual human makes the decision to speed or not independently of other humans, but they do so so frequently that it's a systemic risk in the same way that a programming error is.
true, but the being who designed humans is not subject to the jurisdiction of our legal system. 🙂
Of course you are correct that a design flaw could cause many deaths, but I thought we were talking about per-death comparisons.
Let's suppose a Brett's Bad Driving death is compensated at $243K. If Brett's Bad Programming causes 50 deaths, I'd argue the theoretically correct liability is just 50 x $243K, not 50 x $243K x 1.3 AI enhancement penalty.
Brett,
You're an engineer. Do you really expect six-sigma reliability from mediocre engineers unless you boost the engineering input, the subsequent hardware design, and consequently the hardware price by the same factors that we see in the aviation industry?
Moreover, the arguments above, including yours, seem to suggest that the vehicle has perfect sensors and ultra-high-precision measurements of externalities and that there are unique, non-fuzzy decision options.
I doubt that is the case especially given projections of the state of technology (including AI) over the next decade.
I think it depends on how you frame "go easier." $243k to a family is a lot bigger as a percentage of income than $340k is to a company like Tesla.
All the factors you are describing are exactly why you'd want to move away from humans driving cars to autonomous vehicles, though. Even if an autonomous vehicle was only roughly as safe as a "normal" human, the car is never going to get drunk or tired or overly excited by a holiday weekend so overall we'd be a lot safer. Demanding that the autonomous vehicle is 5x safer means a bunch of people are going to die who wouldn't have if we would have started using them at 2x or 3x safety factors.
If you add up all the listed "causes" of accidents or deaths you get a number over 100%. Drunk driver slams into a pole. NHTSA can make four statistics out of that: he was drunk, he was speeding, he wasn't wearing a seat belt, and he was distracted. If you imagine for simplicity each of those is a third, a state can then apply for separate grants to reduce deaths by a third by enforcing distracted driving laws, a third by enforcing seat belt laws, a third by running speed traps, and a third with DUI roadblocks. Get ready for the zombie apocalypse.
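A tiny hypothetical illustration of the double-counting described above:

```python
# Hypothetical: one crash counted under several "cause" categories makes the
# category shares sum to more than 100%.

crashes = [
    {"drunk", "speeding", "no_seat_belt", "distracted"},  # one crash, four contributing factors
    {"speeding"},
    {"distracted"},
]

factors = ("drunk", "speeding", "no_seat_belt", "distracted")
shares = {f: sum(f in crash for crash in crashes) / len(crashes) for f in factors}

print(shares)                  # the multi-factor crash is counted once per category
print(sum(shares.values()))    # totals 2.0, i.e. 200% of crashes
```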
I disagree. Self-driving cars don't have to be better than the best drivers to be useful. They don't even have to be better than the average driver to be a net gain for society. Self-driving cars should be encouraged as soon as they are better than the worst driver on the road.
Well, yes but only if you are actually replacing the worst drivers. If it's random replacement, then the AI cars need to be better than the average driver. Assuming you're doing utilitarian body count arithmetic, of course, and ignoring the ancient debate on whether such arithmetic is even valid.
It won't be a random replacement, though. The first and most common replacement will be the worst drivers - or at least, the responsible drivers who recognize that they are impaired. Think of the couple that can have a self-driven ride home rather than risk driving "a little tipsy" because it's too much trouble to get an uber then come back for the car the next day. Or think of the seniors who can stay independent and mobile longer instead of the kids having to take their keys away.
We'll also tend to replace the drivers who drive too much and too late - people like me who run an above average chance of falling asleep at the wheel. On the other hand, the people who really like driving and are presumably better at it will tend not to opt for self-driving cars.
It won't be a perfect replacement. Some of the really bad drivers will not replace themselves. And some of the really good drivers will opt for convenience. But in general, I believe that the distribution will be skewed toward the replacement of drivers on the bad end of the spectrum.
Police officer A, who is sober, arrests drunk driver B and drives him to the police station. Except that the cruiser slides off the road on black ice and both are killed. This is an OUI fatality because the passenger was drunk.
It gets worse -- if the passenger (who wasn't driving) has a BAC of 0.01 or above, the accident is listed as "alcohol related."
That's why the federal statistics are bullshyte....
Give it time. When automated elevators first appeared, people were afraid to ride in them without an operator present. Not sure how long that period lasted, but few people are afraid of elevators these days, and the ones that are, are afraid regardless of whether there's an operator or not.
Also, once autonomous vehicles are shown to have a much lower accident rate than human piloted vehicles it's only a matter of time before automated driving is required and human driving is restricted to the racetrack or other controlled arena, making the difference moot. Bottom line: autonomous vehicles will be held to a higher standard than human operated vehicles, and that's probably as it should be since at least in principle they should be much safer. Eventually...
Automatically starting and stopping at arbitrary points on a fixed track in a closed, relatively predictable environment is a lot more like trams or monorails, which have been automated in large numbers for some time.
Freeform driving is a completely different class of problem. The number of unexpected things that can happen is nearly infinite.
... which has nothing to do with ah...Clem's point about the history of adoption of new technologies.
- Elevators were invented. The first ones needed operators.
- People insisted that elevators have operators even after the technology had made that need obsolete.
- The insistence on operators faded. Now, it is close to impossible to find an elevator with an operator.
ah...Clem asserts (and I agree) that operators in cars will follow the same social progression.
And my point is that this isn't anywhere conceptually close to an elevator or horseless carriage situation, so a generic observation about new technologies doesn't apply. Just shrugging our shoulders and saying "meh... technology advances -- they'll get over it just like they always do" doesn't acknowledge the rightfully unique set of mental barriers posed by the thought of flooding the streets with giant masses of metal being propelled at fatal speeds by a nameless, faceless, unaccountable, and generally unauditable system with zero human control.
People got to feel better about elevators because they watched them repeatedly and reliably perform a very limited set of functions over and over for a long time, and because if they wanted to, they could look at the design and components and understand how they were designed to work together and how they would respond safely to a limited and generally predictable set of failures.
There's no corollary in a situation like this, where most accidents are caused by unique and unexpected circumstances. And due to the AI/neural net approach that AV developers are pursuing, there's no way to really explain why the algorithms make the decisions they do, and thus no way to explain why a safe reaction in situation X means it will also react safely in situation Y.
On the contrary, it is conceptually identical. Yes, the decision complexity is higher. So is the available computational power to make those decisions. Your claims that "this time it's different" are identical to the claims made during every other controversial advance in technology throughout history. To defend that statement, you need to explain why you're right but every single predecessor making the same statement was wrong.
"Judges will also play an important role in the development of the liability system governing autonomous vehicles."
OOOF!
Not sure I can accept this statement.
Judges don't "develop" anything, i.e. they are not pro-active in creating a system.
Sure, they're reactive but we can't say they should be part of the development.
And they also react more to three people killed by a drug in a Phase III study than they do to the hundreds of thousands a year who die from the disease they're trying to cure, slowing development.
Nobody wants to be the guinea pig who suffers, but society as a whole is slaughtered, net effect, by this precautionary principle when totalled vs. the ongoing carnage of disease year over year.
dwb68, I'd like to know where you got your numbers?
I believe that there is a Court issue right now over car owners and mechanics being able to access the code on car computers. How would this apply to autonomous vehicles? I have friends who pull the chip out of their car's computer and modify the code for performance. The turning off of the "automatic braking" system was mentioned. What happens when the owner of the car starts modifying the car? Who has the liability then?
I design industrial machinery for a living. Several years ago I was involved in an incident where a customer's employee was severely injured when he disabled or removed several safety features to get more cycles per hour. He was being paid a bonus for excess production. This never went to Court. The lawyers worked out a settlement. There's a lot to be worked out before this should ever happen.
National Highway Traffic Safety Admin puts out numbers and publications https://crashstats.nhtsa.dot.gov/#!/
For example, find the Distracted Driving 2019 publication: "Nine percent of fatal crashes, 15 percent of injury crashes, and 15 percent of all police-reported motor vehicle traffic crashes in 2019 were reported as distraction-affected crashes."
My numbers on distracted driving might have been a little old.
You can also find things like "Alcohol-Impaired Driving" and "Drowsy driving"
Also, as to your comment about modifying the ECU or programming in the vehicle, it's clearly the responsibility (in my mind) of whoever reprogrammed the car to make it unsafe.
dwb68, I certainly agree with you that's how it *should* be.
But I've read about too many lawsuits that boil down to "you should have tried harder to stop that person from misusing your stuff" or even "you should have tried harder to stop ME from misusing your stuff".
My grandmother didn't trust electricity. Early '60s. It does take a while to catch up with technology.
Self-driving vehicles are going to take time. I just read an article about an autonomous semi taking 12(?) hours off a 950-mile trip. I see that as the beginning of real application: long-haul shuttle driving on controlled-access roadways with few anomalies.
In rural America we have been experimenting with autonomous tractors. There is a specific task that is very promising: unloading a combine as it harvests. The current system has a driver and a large auger wagon catching from the combine, taking it to the edge of the field, and filling a semi. With combines getting bigger and bigger, the "art" of driving alongside the combine, at speed, with tolerances of less than a few feet, takes a very experienced driver. With fields being fully GPS-mapped down to centimeter tolerances, the combine self-driving (with an operator), and an autonomous catch cart, accidents are all but eliminated.
We already are doing a lot. We already have lane sensors, adaptive seed control and other systems that take human error and judgement away from the operator.
We are generating hundreds of thousands of hours of experience with autonomous vehicles. We will get there.
I think the legal solution is obvious. Automated car manufacturers need to become the insurers. That gives (a) a motive to make the designs as good as possible to reduce claim payout, and (b) a way to pass on sky high jury awards to all drivers in the form of premiums.
There have already been suggestions that Tesla could become a highly competitive insurer. Tesla has the benefit of tons of data gathered by cars spying on the drivers. That allows them to know who speeds, who drifts through stop signs, who gets too close to other cars or obstacles, who drives while impaired. They could use that to give cheaper rates to good drivers and punitive rates to bad drivers. If they do that, then they could contribute to auto safety in another way by motivating bad drivers to change their behaviors.
At the least, we should not analyze the future using assumptions that the landscape of the playing field will remain unchanged.
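To make the manufacturer-as-insurer idea concrete, here is a sketch of usage-based pricing from driving telemetry; the field names, weights, and base rate are all invented for illustration and do not describe any actual Tesla program:

```python
# Hypothetical usage-based premium: the insurer/manufacturer scores each
# driver from telemetry and scales a base premium. Field names, weights,
# and the base rate are invented for illustration.

BASE_ANNUAL_PREMIUM = 1_200.0  # dollars, hypothetical

def risk_multiplier(telemetry: dict) -> float:
    """Turn per-1,000-mile event counts into a premium multiplier."""
    m = 1.0
    m += 0.05 * telemetry.get("hard_braking_events", 0)
    m += 0.10 * telemetry.get("speeding_events", 0)
    m += 0.50 * telemetry.get("impaired_driving_flags", 0)
    return max(0.7, m)  # floor, so good drivers get a discount rather than a free ride

def annual_premium(telemetry: dict) -> float:
    return BASE_ANNUAL_PREMIUM * risk_multiplier(telemetry)

careful = {"hard_braking_events": 1, "speeding_events": 0}
risky = {"hard_braking_events": 8, "speeding_events": 5, "impaired_driving_flags": 1}

print(f"Careful driver: ${annual_premium(careful):,.0f}")
print(f"Risky driver:   ${annual_premium(risky):,.0f}")
```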
Related issue - I'm in a group working on a sensor device to continuously monitor a critical component on railroad equipment. Currently the industry relies on periodic inspections and safety-by-design, and there are (let's say) 50 failures a year that result in an economically significant accident or worse.
Suppose this new sensor catches 70% of bad components. If and when it's deployed, I'd like to think we prevented 35 accidents a year. But it's clear from the post and some of the comments here that many of you will see us as "causing" the 15 accidents we failed to prevent.
And if you put it that way am I not morally obligated to stop working on it right now?
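The accounting in the comment above, written out with its own hypothetical numbers:

```python
# The railroad-sensor accounting, using the hypothetical figures above.
baseline_failures_per_year = 50
detection_rate = 0.70

prevented = baseline_failures_per_year * detection_rate      # 35 accidents avoided
missed = baseline_failures_per_year * (1 - detection_rate)   # 15 that still happen

print(f"Prevented: {prevented:.0f} per year")
print(f"Missed:    {missed:.0f} per year")
# Whether the missed 15 are "caused" by the sensor or merely not prevented
# is the framing question the comment raises.
```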
I think if you insist on a 15% error rate, there is some flaw in the reasoning at least from a strictly utilitarian point of view.
Edit: I meant 15% error rate for automated vehicles relative to human driven vehicles.
No, by the reasoning here, you're obligated to finish the work, because abandoning it is a decision to cause the 35 accidents it would have caught.
Of course in real life I agree with you and continue to accept my paycheck without any feeling of guilt.
So, if we insist on automated vehicles being 15% as dangerous as human driven, either through direct regulation or by setting up a liability system that effectively enforces it, and that delays implementation, aren't we effectively "causing" 85% of human accidents while we wait? Seems to me a classic case of making the perfect the enemy of the good.
Well, yes, I agree, and think that in fact automated vehicles ought to be accepted as soon as they're as safe as we would require of human drivers. On a provisional basis once they're as safe as teen drivers, and unrestricted once they're as safe as a moderately experienced adult driver.
In fact, I'd say self-driving vehicles are probably already safer than human drivers for restricted access highways, though they should revert to human control if work zones are encountered.
Liability does not really work like that. No one is "enforcing" a 15% failure rate. Liability simply sets a system where IF you fail, you get penalized. Lots of people/companies fail anyway (speed, etc) and accept the liability/penalty, viewing it more like a voluntary tax. Lots of companies consider environmental or labor law penalties simply a cost of doing business. There are also non-monetary reputational penalties businesses consider.
I also think you have to be careful how you frame penalties. In the comparison above, the author compares $243k to a human vs $340k to a company like Tesla. I think, as a percentage of revenue (income), the company is probably getting a much lower penalty, the deterrence effect is much lower.
Large companies often get multi billion dollar fines no human would ever receive: https://www.pbs.org/wgbh/frontline/article/bp-to-pay-record-4-5-billion-for-2010-gulf-oil-spill/
Thanks, and I accept all that. What I mean by "effectively" is that the liability/penalties are high enough that we don't start seeing a lot of autonomous vehicles until they reach that safety level. Which, from a strict total-body-count point of view, is too late; you'd want to see them as soon as they're even a little safer than the average driver.
Of course I'm making a lot of simplifications both ethical and economic, as you point out companies and individuals make different calculations and have different levels at which they'd be deterred from going on the road. Really just making the simple point that insisting on too high a level of safety actually costs lives.
I don't think I am saying that.
For all the touted improvements in safety attributed to autonomous vehicles, I remain a serious skeptic. I have been dealing directly with various computers ever since my Lady acquired an Apple II, back when 256k was a lot of memory. Computers are wonderful tools, right up until they suddenly become a plague. To err is human. To repeat that error hundreds of thousands of times in a second requires a computer.
Frankly, what I envision happening is a busy commute one morning, and suddenly thousands of copies of the same model of autonomous vehicle suffer from a long undiscovered bug, and turn right in traffic for no good reason.
Whereupon autonomous vehicles are regulated to a fare-thee-well by posturing Congressional nitwits, and thereby become too expensive to be at all practical.
Same here. I'm actually hard pressed to recall anything at the time (~1980) that had more than 64k.
Words to live by, then and now.
Obviously in regards to other vehicles on the road, average rates of accidents are a reasonable thing to look at. (You have no control over other drivers, so if autonomous vehicles do have a lower accident rate than human driven vehicles, that would be a logical reason to be okay with them on the road).
But in regards to a willingness to be *driven* by an autonomous vehicle, that's a very different analysis. The average accident rate is no longer particularly relevant - your own personal driving safety record is far more important. Just because an autonomous vehicle is safer than the average driver doesn't mean it's safer than you. So even if autonomous cars achieve 1/5th the accident rate of drivers, some drivers will still be better than that.
And of course, people overestimate their capabilities, so for every person who's a safer driver than the autonomous vehicle, there's probably 5 who think they are (even when they aren't).
As such, I might expect a significant difference between 'people who think autonomous cars on the road are okay' and 'people willing to be driven by one'. (At least among polled individuals with a minimum amount of sophistication in their thinking. Anyone who is modeling other drivers as 'just like them' probably won't distinguish.)
Fwiw, I'm always at least a little uncomfortable being driven by someone else, even when I know they're a very safe driver. The driver being an AI would be unlikely to change that.
Do you ride on buses? trains? planes?
Part of the psychology here is being aware what mistakes the person could make. I know roughly what driving training my mother-in-law has; even more importantly, I can see in real time what risks she is taking and know her capacity for straying off task. In the case of the airline pilot I just tell myself he's got superior knowledge and personal integrity that I can't possibly understand and then try to think about something else.
Planes - yes, sometimes. I don't have a pilot's license, and thus I have no basis for comparison. Comfort level is relative, and without having the 'or I could be flying the plane' comfort level to compare to, it's hard to say if being flown by someone else makes me a little uncomfortable.
Buses - not in ~10 years or so. And it's certainly less comfortable than driving myself.
Trains - it's been a while, but at times. Again, I've never driven a train, so comparing comfort levels is hard.
Probably also worth pointing out that planes and trains are remarkably safe, much more so than motor vehicles on streets.
The software that controls autonomous vehicles (call it "AI" since all kinds of software is being called AI now) will have baked into it choices such as whether to save the life of the occupant(s) vs saving the life of a pedestrian, in the circumstance where such a hard choice is presented.
Hacking autonomous vehicles to glitch out or gain control of them could become a high tech method of carrying out hits/assassinations. Certainly will be in the movies, anyway.
Autonomous vehicles are probably decades away, if they will ever really materialize in the way people think. That hasn't stopped all kinds of ridiculous hype over the last 10 years, which proves wrong and is then replaced by new hype, over and over.
"Would you rather be run over by a self-driving car or a car driven by a human being? Assuming a similar vehicle travelling at a similar speed, the choice should hardly matter."
In traffic policy I see people willing to tolerate increased accidents and injuries as a result of more regulation. One injury at an intersection with a pair of stop signs is the fault of city officials not caring, two injuries at the same intersection with all way stop signs must be the fault of the reckless drivers. Similar illogic is used to justify increased injuries due to camera enforcement.
For example, people are more apt to blame defendants who adopt nontraditional medical treatment or investment strategies than those who stick with the tried and true.
Doesn't the whole world run on that principle? For example, where will you find a politician who will take a chance on being right in a conspicuously lonely policy, when he has the option of being wrong in plentiful company?