Your Self-Driving Car: Minimize Casualties or Save You?
What algorithms will actually save the most lives in the long run?

Should your self-driving car be programmed to make a utilitarian calculation that sacrifices you to save the lives of more numerous others when an accident is inevitable? University of California, Irvine psychologist Azim Shariff and his colleagues reported the responses of Americans to this question in their study, "The Social Dilemma of Autonomous Vehicles," in Science. First, they found that most people in surveys do want self-driving vehicles to be programmed to minimize casualties, even if that means sacrificing the vehicle's passengers. How very utilitarian!
But here's the kicker: Most respondents also say that they would neither buy nor ride in automobiles run on algorithms that might sacrifice them to save the lives of others. If cars are programmed to make such utilitarian moral choices, that could well slow their public acceptance, which paradoxically would mean even more lives lost over time as error-prone humans continue to haplessly navigate highways and streets.
More than two years ago in my article, "The Moral Case for Self-Driving Cars," I cut this supposed Gordian knot of ethical perplexity. I pointed out that such fraught moral dilemmas are rare and will become even rarer with the advent of autonomous vehicles. Recognizing the natural instinct for self-preservation, I argued, "Although it may not be the moral thing to do, most drivers will react in ways that they hope will protect themselves and their passengers. So as a first approximation, autonomous vehicles should be programmed to choose actions that aim to protect their occupants."
Since self-driving cars will be safer than human-driven vehicles, the right thing to do is to encourage people to trust them so that their adoption and deployment will proceed as quickly as reasonably possible. Consequently, programming cars to preserve the lives of their passengers is the ethical thing to do.
Most respondents also say that they would not buy nor would they ride in automobiles run on algorithms that might sacrifice them to save the lives of others.
Shocking. So I guess we'll be keeping that aspect of the programming from the consumer. I, on the other hand, want a car programmed to keep its insurance rates as low as possible.
I, on the other hand, want a car programmed to keep its insurance rates as low as possible.
"I'm sorry, Professor Falken Fist of Liable, but the only winning move is not to drive."
I, on the other hand, want a car programmed to keep its insurance rates as low as possible.
I think you're on the right track, but 'It', the car, does not buy the insurance. I'm not sure how to handle this.
I choose my words carefully.
You didn't choose them, Fist of Tricknology. Your AI programme did. For insurance purposes, no doubt.
That's what happens when you leave, "Automatic Updates," on and it D/Ls and installs the Preet Service Pack Update.
Is this a Westworld spoiler?
BTW, this was one of the reasons for the SUV boom. You're safer in your giant land yacht than the idiot in the tiny rice burner you're crashing into, the entire thing being a crumple zone.
But much more likely to roll over doing something dumb.
^This... Likelihood of death or dismemberment about the same.
I think you can measure the value of life by how willing people are to drive a more dangerous car with better gas mileage as gas prices change.
Which would seem to indicate that people are more interested in being up high and looking cool than safety.
And people overestimate the safety of a lot of SUVs. Many of them don't do very well in crash tests. And plenty of compact cars do very well.
These people believe that their giant car gives them a license not to pay attention. In the last year my poor little Honda Fit has been hit twice on the road by people who just straight up didn't see it: once by a Tahoe and once by a Silverado.
Why should it matter? I've been told over and over again that self-driving vehicles will lead to automotive utopia in which accidents no longer happen ever. Why prepare for an eventuality that will never occur?
Pedestrians.
Jet packs. Problem solved.
Except for those idiot flyers!
Religious folks believe in a Guy-in-the-Sky, who is benevolent if you do what his earthly representatives tell you to do. Progressives believe in government, which is benevolent if they are running it. Technofreaks believe in self-driving cars, which are benevolent because they don't seem to involve human beings.
IME, the first and third groups contain significant elements that would be completely cool with you reading the manual/code, grokking it, and deciding on your own from there.
'Seem' is an important consideration, since as far as I'm aware robots aren't the ones programming the robots. Odd that people seem to trust the Wizards of Silicon Valley over themselves. I suppose it's true at this point that people are too stupid to realize that a computer or algorithm is only as good as its programmer, and just because you get 10,000 programmers to agree it's perfect doesn't mean shit.
It's hubris, top to bottom.
I wonder what happens when the people testing these cars realize that it snows for large parts of the year in a lot of places? I'll believe that a car can drive to work in a major snowstorm as effectively as I can when I see it.
OT: anyone in California know anything about Kamals Harris or Loretta Sanchez, and which one might be less awful?
I know enough that I wouldn't endorse either with my vote. Kamala Harris might actually be evil though. So there's that.
Agreed; Clinton/Trump dilemma.
I didn't waste the ink to make a choice.
there are enough significant differences between Trump and Clinton to make an intelligent choice.
One is clearly a criminal, has SERIOUS ethical and medical problems; the other is a rogue and wild card.
One will seat leftist SCOTUS judges that will destroy the Constitution; the other is an unknown.
Sanchez is a moron. Like Barbara Boxer level kind of stupid, she used to be my Congresscritter.
Harris is more likely to be another Feinstein so I'd pick stupid over evil. (If I still lived there instead of AZ where at least it only feels like Hell in a physical sense).
Me, too. What part of AZ?
Kamala was the one Obama called the most beautiful AG in the country, and got a lot of flack for (I imagine Michelle distributed her own). She also began a prosecution (I forget over what) which even she admitted was not going anywhere, just because she wanted to send a message.
I voted for the other one because Kamala is more of an incumbent than the other one, and has more political support from the big name incumbents. I doubt there's much practical difference.
I may have to take back who I voted for. If there was a libertarian, that's who I voted for. But this was California's weird best-two-from-the-primary, so there may not have been any other choices. My goal is always to unseat incumbents.
Kamala has been discussed in Reason numerous times. Overzealous prosecutor type. Sanchez is more of a centrist, coming from a congressional district in Orange County.
I would imagine the "solution" to this will be to slow them down to the point that fatal accidents are nearly impossible. It's hard to die or kill someone at golf cart speeds.
As my wife did insurance for hotels and resorts at one point, I would have to disagree with you there.
In fact, golf carts are the largest source of liability for hotels w/ golf courses.
But that's drunk idiots driving them, right? (Speaking as a drunk idiot that has driven a golf cart.)
Of course. Mostly. And occasionally not drunk idiots.
I worked a few summers mowing greens. One bachelor party rolled 3 carts, one after the other. Same corner.
These euphemisms are getting really abstract. And elitist.
Was that at some country club in New York by lake Champlain a few years ago? If so I am sorry. I rolled two carts that weekend.
Vermont. Close enough. They were actually pretty cool the whole time
Number of people killed in golf carts: ________
It kills me to say this... but SUGARFREE'S POINT STILL STANDS!!!1!!
Woman charged in TN golf cart death
Man enters plea in golf cart death
That's 2 for the record. ~50,000 more to go.
You posted a humdinger.
Not quite, Saccharin Man.
More fodder for Ronal'd Bejlij's Technocracy Boner...
Face it, until Ronal'd Bejlij's vision of forced AI tech acceptance comes to fruition, the only means of safe autonomous travel he will endorse is strictly by foot. I'm absolutely shocked that Ron hasn't outright advocated for AI Government, a la Dr. Theopilus and his cohorts.
Wouldn't that particular anecdote have been prevented by being in a car with doors and restraints?
Or, if Ronal'd would have it, an AI driver preventing the driver from making an error in judgement, so's they could have enjoyed the golf cart ride in peace and perfect safety. Sheesh, do you even AI, UCS?
No, I work at the wellspring of Natural Stupidity. Bringing more authentic 100% natural stupid into the world is the purpose of my employers.
So, you work for an OB/GYN, according to Ronal'd Bejlij?
**Ducks bedpan thrown at me by Dr. ZG**
No, I work for the State of New York.
Yes, I got what you were saying regarding Bailey and humans.
Yes, I got what you were saying regarding Bailey and humans
I knew you would. We haven't lost you to The Collective, thank Sod.
It's a certainty that the only possible way such an autonomous car system works is to make it illegal for humans to drive, with a new government agency to oversee the various spy software installed in every vehicle, for your protection* and to improve safety*.
Either Bailey is ignorant of this fact, believes there's some way to implement such a system without mandates and loss of freedom, or he believes in sacrificing freedom in the name of safety.
Sadly, in all three scenarios he's a statist. You would need to be blind and a fool to believe in this pipe dream, much like those idiots that want 'solar roads' made of glass.
* denotes obvious lies.
"Come on, guys... Let's go for a ride!"
How adorable! It's the genial googley eyes that give everyone a false sense of security. No one can resist them, like no one can resist kewpie dolls or bobble heads.
It has the sort of face that says "I thirst for the blood of the innocent."
Relevant
https://www.youtube.com/watch?v=7UTaSMYK4gs
Really?
Airplanes have had the capability to fly fully automatically from takeoff through landing for over a decade. Yet most airlines have a policy that pilots must hand-fly all landings and takeoffs.
Because automated systems tend to fall apart when one sensor degrades or fails. Humans do not. Automatics fail utterly when they are handed a situation that they were not designed for. Humans struggle but often muddle through.
Until you see airlines fully automating their flights, and getting away with it without a dramatic increase in crashes, fully automated cars should be treated as a pie-in-the-sky fantasy - like sexbots that look and feel like a 25-year-old Ornella Muti.
There will be increasing automation in cars to assist drivers to better and more safely operate the car. And some day that may get to the point where I can get the winnebago on the interstate, put it on the governor, walk into the back and make my wife a nice sandwich. But I'm not holding my breath.
Because automated systems tend to fall apart when one sensor degrades or fails.
Automation engineers have a solution to this, and have for some time. It's called triple modular redundancy (TMR). Sensors, aka instruments, are deployed in sets of three. A vote takes place when one of the instruments offers a measurement out of sorts with the other two. Who holds the vote? A TMR computer like a Triconex PLC, which is also TMR.
This is some very expensive shit.
https://en.wikipedia.org/wiki/Triconex
What does it do when all three disagree?
Shuts down the system.
That's going to fail deadly when the second sensor goes mid-trip.
In a self-driving car you're talking a non-negligible percentage of vehicles will have poor maintenance records, being run until the system fails outright. Of course, that happens today already, which is why I know it will continue to happen, even if a significant amount of self-driving vehicles take over the road.
Yeah, the shutdown is, in itself, an engineering problem. There are also fail-safe, hard-wired relays and e-stops to shut the system down in any petrochemical plant, e.g.
Engineers are a modest lot.
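The 2-out-of-3 voting and shut-down-on-total-disagreement behavior described above can be sketched in a few lines. This is a hypothetical illustration, not any real PLC's logic; the tolerance value, function name, and status strings are all invented for the example.

```python
# Hypothetical sketch of 2-out-of-3 (TMR) sensor voting. The tolerance,
# names, and status strings are illustrative, not from any real PLC API.
TOLERANCE = 0.5  # readings within this range of each other "agree"

def tmr_vote(a, b, c):
    """Return (value, status) from three redundant sensor readings.

    If at least two readings agree within TOLERANCE, return their mean,
    noting whether a third sensor was outvoted. If all three disagree,
    signal a shutdown, mirroring the fail-safe behavior described above.
    """
    # Each tuple: two candidate agreeing readings, then the odd one out.
    pairs = [(a, b, c), (a, c, b), (b, c, a)]
    for x, y, outlier in pairs:
        if abs(x - y) <= TOLERANCE:
            status = "ok" if abs(outlier - x) <= TOLERANCE else "outvoted one sensor"
            return (x + y) / 2, status
    return None, "shutdown: all three sensors disagree"
```

A usage sketch: `tmr_vote(10.0, 10.1, 99.0)` returns roughly `(10.05, "outvoted one sensor")`, while three mutually inconsistent readings trigger the shutdown status.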
have a policy that pilots must hand fly all landings and take offs.
That would be incorrect. Airline policies require each pilot (Captain and First Officer) to fly a minimum number of landings by hand to maintain proficiency. That minimum is well below All.
Because automated systems tend to fall apart when one sensor degrades or fails. Humans do not. Automatics fail utterly when they are handed a situation that they were not designed for. Humans struggle but often muddle through.
You have no fucking idea what you are talking about.
So automatic cars will be just like the cars in Demolition Man?
How many people are suffocated by that safety foam?
That never happened in the movie. Therefore, none.
I found that to be the least credible part of that film.
As somebody who watches Air Crash Investigations like some women watch Dateline, I can say that this isn't true. The least safe component of the airplane when a sensor fails is the pilot. Why? Because they tend to trust their instincts and feelings over the remaining working instruments.
See:
Aeroflot 593 (pilot let his kid into the pilot's seat, kid disengaged the aileron autopilot by yanking the yoke, kid notices the roll in time, but pilot doesn't trust the ADI and becomes preoccupied with the flight path indicator showing a holding pattern. Pilot doesn't begin to correct until the plane was in a 90° bank and ends up stalling it)
United 173 (one of the landing gear indicator lights doesn't come on due to a non-threatening failure of a mechanical component... the plane was still perfectly landable, pilot orders the plane into a holding pattern while the flight crew investigates whether the plane is safe to land, the copilot and flight engineer warn the pilot that they're running out of fuel, the pilot is too focused on the landing gear problem to heed their warnings, the plane runs out of fuel on final approach and slams into a suburb)
(See also Aeroflot Tu-124, Scandinavian Air 933, and Eastern 401 for other pilot-induced crashes due to preoccupation with a landing gear indicator light)
First, they find that most people in surveys do want self-driving vehicles to be programmed to minimize casualties even if that means sacrificing the vehicle's passengers. How very utilitarian!
Most respondents also say that they would not buy nor would they ride in automobiles run on algorithms that might sacrifice them to save the lives of others.
I'd imagine that the differences here result entirely from how the questions are worded, making the survey kind of useless. Most people are not altruistic in the least. They are, in fact, selfish, self-absorbed assholes. And litigious assholes, at that.
Count me among them. My car is supposed to protect me when I'm in it.
OTOH
Yeah, call me an asshole, but why would I purchase a product that will choose to protect some other asshole's life over mine? No offense, but he who purchases the thing should be the thing's #1 priority. You'll note that cars don't have airbags on the outside, either.
I, on the other hand, do not want a self driving car at all. ever.
I don't want to own one, don't want to ride in one, and would remove the capability from any vehicle I purchased. (of course, I plan to keep the cars I have for the next 20 years anyway)
LUDDITE!1
Which one?
*hides official Luddite Alliance membership card*
MJBinAL|11.7.16 @ 12:02PM|#
"I, on the other hand, do not want a self driving car at all. ever."
+1.
If the government CAN collect information about you, the government WILL collect information about you.
That's the problem. I don't doubt that the self driving car is solvable as an engineering problem. But I cannot imagine a scenario where they aren't so heavily regulated that you effectively can't own your own car anymore. All maintenance will have to be done by officially approved and licensed people and there will be the means (which as you say will be used) to track anyone's movements.
Of course the market will make accommodations for Luddites and/or the poor. You're gonna need Granny-on-a-fixed-income to switch to an automated driver ASAP but she won't be able to afford the newest Chrysler LeBaron (not to mention all the tuners out there modifying their vehicles beyond spec). So, given appropriate time, aftermarket sensors and fly-by-wire retrofit kits will become abundant. Assuming legislators don't do anything absolutely retarded. The government will snoop and the big 3 will probably cater to it, but there will be ways around it.
The only way such a system works is if they're all working off the same make, year, model, and OS as everything else on the road while being entirely interconnected. I.E. the government will mandate that entire market, 100%. There is no way around that one when you're forcing millions of people to bend to your will for their own good, right?
I, on the other hand, do not want a self driving car at all. ever.
Me neither. Because I know full well whose dumb ass is still going to be legally liable when my "self-driving car" somehow fails to detect the kid or the idiot pedestrian in the middle of the street when he shouldn't be there.
If freedom is your pleasure, a motorcycle or a used sports car might be worth considering. I can't do a motorcycle even though I could afford one. The wife will complain a bit, but I could buy a used car with a stick without much acrimony. Not only do I not want the car to defend me, I don't even want it to decide what gear I'm in.
You won't be allowed to drive, is the crux of the issue. The only way self driving cars can work, is if humans are prohibited from ever driving again. It's obvious.
Of course, I'm sure the government will give you a fair price for all those cars that don't meet the governments criteria that you'll be forced to get rid of. It's for your own good, after all!
Second Law dilemma, that would default to the First Law considerations. If they would have the AI make a utilitarian choice, imagine the lawsuit that comes up when it makes such a choice based on a false positive on outside human beings being harmed. I think the actual programming here is easier debated than done.
So I took the online quiz for this, and my preferences are 100% on the side of saving pedestrians. You're the idiot in a robot car? At 1:1 I'm saving the pedestrians at the cost of the driver. I might have even shown a bias beyond just even numbers.
Where the hell are all these libertarians when it comes time to vote?
Advocating for their slice of the Free Shit pie. Duh!
Enjoy the government mandated inspections and maintenance required to assure the authorities that your self driving car is not a menace to the public.
Is this going to be any different from the annual government mandated inspections (that require maintenance to pass) I already have to deal with?
It will at least be much worse in degree. Like the pain in the ass of emissions testing applied to all the systems in the car. And no way will you be allowed to work on your own car.
🙁
It cost me $400 to get a $2 part changed the last time I needed to pass inspection because of where it was in the emissions system.
VW fined heavily; other automakers hardest hit.
Eh, this is an issue that's not really an issue.
First, I think we're a long way from the realtime human casualty optimization sensor/software unit.
Two, the overall safety increase for everyone would be so great that such an optimization would be occurring at more and more rare events, out in the tails.
For example: let's say we had to come up with some socialized concept of airliner collision casualty optimization.
Would you care deeply and worry about that?
Or would you file it under "shit that will probably never happen"?
The solution is to program the AI to realize it isn't going to be able to make a decision that will save all lives, so it flashes red lights, sets off sirens, and returns control to the human. The human, who was asleep or texting, won't have time to react, but it will still be his/her fault. Thus the AI is mistake-free.
+1 Air Asia flight 8501
"Patch 22.128.89c: Fixes recognition error where pedestrians were being tagged as display mannequins."
"Patch 22.224.66b: Fixes infinite loop in accident avoidance methodology involving two or more pedestrians"
"Patch 22.309.43r: Fixes comparison error which causes AI to choose accident avoidance methodology resulting in the greatest fatalities."
"Patchset 23: All new Accident Avoidance methodologies introduced."
Lovely!
Tesla Model S with Autopilot engaged recently rear ended a motorcyclist because the system didn't "see" the rider.
"Can Autonomous Cars Detect Motorcycles?"
Cycle World
http://www.cycleworld.com/can-.....otorcycles
I wonder how well it really does seeing children and pets.
I wonder how well it really does seeing children and pets.
It will suck.
'Since self-driving cars will be safer than human-driven vehicles,'????
And somehow this is a given???????
It is once you entirely excise the human variable. Well, until you start thinking about things like deer running into the road, or snow and/or rain so heavy the sensors fail, but y'know, as long as it saves even one life it's worth the excise of liberty from everyone, Comrade!
Human sensors, eyes, fail in snow and rain.
A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
just sayin'
That means the car would protect those outside the car first, those inside second and itself third.
It's been a long time since I read I, Robot, but wasn't one of the stories about a giant super-computer that subtly took over the world and manipulated all humans?
If an AI car is making a calculation as to whom to let die, shouldn't it err on the side of the person who invested in the AI car, and not on the side of the dumbass who made the unpredictable move?
This is like when I carry a gun. Don't ask me to risk my family to protect you. If you wanted to be protected by a gun, you would have brought one. My goal is simply to escape unharmed. If you get plugged, so be it.
The AI car should err on the side of whatever the person who owns and operates it and is responsible for the consequences of its actions decides it should.
So the car should kill the kindergarten class instead of the occupants? Is the user going to be held responsible?
If I make the choice to have the algorithm sacrifice my life to save pedestrians, that is one thing. If, however, the government makes that choice for me, that is quite another.
I am sorry, Ron, but you have no right to tell someone they must sacrifice their life for the collective good. If you believe that, then you need to stop calling yourself a Libertarian and start writing for Salon.
But you don't have a right to run down people with your car.
Screw utilitarianism. No, seriously.
The next time a utilitarian tries to sway your opinion, just tell him you'd let the kid drown. It pretty much snuffs out utilitarianism in its infancy.
Good answer. The answer to the question of "whether the kid should drown" depends on whether the kid is you or yours, and whether the other kid that is supposed to die to save him is you or your kid. Of course the person asking the question never has any skin in the game and is just asking other people to die for the common good. Utilitarians are real charitable like that. They are always willing to offer other people the honor of dying for the common good but would never take the honor themselves.
Consider this: three cars are on the road, car A with one passenger, car B with two passengers, and car C with three passengers.
Car C has a fatal error, and without intervention will result in the deaths of all passengers.
Both Car A and Car B can sacrifice themselves (and their passengers) in order to save Car C's passengers.
But if both Car A and Car B attempt to do so, then all passengers will die.
So the choices are:
Car A and Car B do nothing, 50% fatality rate as Car C dies a grisly death.
Car A does something and Car B does nothing, 17% fatality rate as Car A dies to save Car C
Car A does nothing and Car B does something, 33% fatality rate as Car B dies to save Car C
Car A and Car B do something, 100% fatality rate as Car A and Car B interfere with each other's heroic actions and make things worse.
Now, that's the "God's eye view" of things. But what do the cars actually know? If Car C is between Car A and Car B, then it might be masking them from each other, so B doesn't know A is there, so to B's knowledge, its best choice is to sacrifice itself. And for that matter, how do the cars know how many passengers are in another car? And even if they know about all the passengers in all the cars and all the cars are present, how does B know that A's driver will let A sacrifice itself?
Face it, a lot of these moral/ethical dilemmas require complete knowledge to arrive at the "correct" answer. And that's the thing: complete networking between the cars is unlikely to happen. For that to work, there would have to be a standard message protocol/language between automated cars, for them to be able to inform others of their decision to sacrifice, for them to be able to work around human drivers/operators, and so-on.
It's not an unsolvable problem, but I suspect that in the near-term (five to ten years) where we can hope for early autonomous cars to be hitting the road commercially, it won't be solved. Cars will necessarily have to prioritize their own survival/health, because they will lack the complete situational awareness to make the more nuanced decisions.
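The fatality percentages in the three-car scenario above are easy to sanity-check with a toy enumeration. The occupant counts come from the scenario; the option labels and variable names are invented for the illustration.

```python
# Toy enumeration of the three-car dilemma described above.
# Occupant counts come from the scenario; option labels are illustrative.
occupants = {"A": 1, "B": 2, "C": 3}
total = sum(occupants.values())  # 6 people across the three cars

# Each option maps to the set of cars whose passengers die.
options = {
    "A and B do nothing": {"C"},            # C's fatal error goes unaddressed
    "only A sacrifices":  {"A"},            # A dies to save C
    "only B sacrifices":  {"B"},            # B dies to save C
    "A and B both act":   {"A", "B", "C"},  # they interfere; everyone dies
}

for label, doomed in options.items():
    deaths = sum(occupants[car] for car in doomed)
    print(f"{label}: {deaths}/{total} dead ({deaths / total:.0%})")
```

Running this reproduces the 50% / 17% / 33% / 100% figures, which makes the point: the "optimal" option depends entirely on a God's-eye tally that no individual car can compute from local knowledge.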
Face it, a lot of these moral/ethical dilemmas require complete knowledge to arrive at the "correct" answer.
No, not a lot, every ethical dilemma requires complete knowledge or at least the assumption that you have complete knowledge to arrive at a solution. This is why utilitarians are so sinister. They assume you have complete knowledge when in reality you never do.
Cars will necessarily have to prioritize their own survival/health, because they will lack the complete situational awareness to make the more nuanced decisions.
That is very true and will continue forever. I don't care how many protocols you write, you will never have complete knowledge of a situation.
As an aside, once you start writing protocols and having cars talk to each other, you necessarily open them up to hacking. Such cars will therefore only be as safe as their network security. Would you bet your life on the effectiveness of network security? I wouldn't.
I think the NAP requires the car to be programmed to do everything possible to not harm people outside the car. I would never sacrifice others to save myself or my passengers.
Not having to devote another minute to the soul-sucking, brain-wasting chore called driving.
Besides, what's the goal? Safety or freedom?
Like any other Technocratic Pragmatist, which Ronal'd is, desires: Control. Strictly for the betterment of others, of course.
Row row row your gears
Quickly down the lane
Quickly clutch downshift now, drifting like a dream
Eh could be better
I wish I could enjoy it, I really do. But it brings together two things I hate most, paying attention and dealing with other people.
Not having to devote another minute to the soul-sucking, brain-wasting chore called driving.
Uber, Lyft, and traditional Medallions. Also, Public Transportation. Unless you are like P Brooks, RC Dean, or Suthenboy and live in the middle of nowhere with nary a neighbour about.
Need to implement a curfew for the serfs?
Look no more, have i got the product for you!
That is a great sunday. I was calling junkyards for parts. Might take a trip to the famous "yotayard" this week
So, no different than your view WRT, "Having The Sex?"
Pretty much identical, including screaming incoherent epithets the whole time.