Science & Technology

Applied Philosophy: Who Do We Program This Car To Hit?

The ethics of autonomous vehicles.


Maybe we could ask the car what it thinks. (NBC)

In the last moments before your car crashes, you have just enough control over the wheel to make a choice: Are you going to collide with the motorcyclist who's wearing a helmet or the one who isn't? The guy wearing the helmet is more likely to survive the crash, so that seems like the less awful option, right?

Now suppose you're programming an autonomous vehicle and you need to decide how it should behave in such a scenario. Presumably it too should hit the guy whose head is protected. But wait: If that's the industry standard, doesn't that give bikers an incentive not to wear a helmet, increasing the likelihood that they'll be injured in some other, statistically more likely accident? You could try to keep the algorithm a secret, of course, but these things have a way of getting out—especially if someone sues your company after an injury. And there are ethical issues with not being transparent, too.
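To see why the incentive problem falls straight out of the decision rule, here is a minimal sketch in Python of a "pick the target most likely to survive" policy. Everything in it is invented for illustration—the class names, the survival figures, the whole setup—and it is not anyone's actual crash-optimization code.

```python
# Hypothetical sketch of a harm-minimizing target choice.
# The survival probabilities below are made-up numbers for illustration only.

from dataclasses import dataclass


@dataclass
class Target:
    label: str
    survival_probability: float  # assumed chance the person survives the impact


def choose_target(targets: list[Target]) -> Target:
    """Return the collision target with the best survival odds,
    i.e. the choice that minimizes expected harm."""
    return max(targets, key=lambda t: t.survival_probability)


riders = [
    Target("motorcyclist with helmet", survival_probability=0.70),     # invented figure
    Target("motorcyclist without helmet", survival_probability=0.40),  # invented figure
]

print(choose_target(riders).label)
# -> motorcyclist with helmet
# The perverse incentive: under this rule, wearing a helmet is precisely
# what makes you the preferred target.
```

Nothing in the sketch is sophisticated; the point is that once the rule is written down, the incentive it creates is written down with it—which is why keeping it secret, or publishing it, are both ethically fraught.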

It's like a weird mash-up of the trolley problem with some Aaron Wildavsky–style study of regulation and risk, all wrapped up in one of Lawrence Lessig's arguments about code as law. But it's just one of several ethical dilemmas that the cars' designers face. Patrick Lin explores those questions in an interesting article for Wired. "While human drivers can only react instinctively in a sudden emergency," Lin writes, "a robot car is driven by software, constantly scanning its environment with unblinking sensors and able to perform many calculations before we're even aware of danger. They can make split-second choices to optimize crashes—that is, to minimize harm. But software needs to be programmed, and it is unclear how to do that for the hard cases."

It's inevitable that a lot of this is going to be determined by trial and error, as manufacturers make adjustments and a body of case law emerges. Obviously it's best to work out as much as you can in advance, but if that holds you up, another issue intrudes: If autonomous cars are safer than cars controlled by a human driver, wouldn't it save lives to get them on the road sooner rather than later? So what are the ethical implications of an ethically motivated delay? Ow, my head hurts.