Why An Autonomous Car Might Have To Kill You On Purpose

Researchers have been looking into an ethical dilemma presented by autonomous vehicles: in some unavoidable crashes, the car might have to sacrifice its passenger for 'the greater good'

Like it or not (and I suspect most of you will be in the not camp), autonomous cars are coming. But while we’re all busy wondering where this leaves the world of car fans - whether we’ll be banned from driving and thus be forced to rise to power and annex Wales as a petrolhead playground - researchers have been looking at potential ethical dilemmas that are brought up by self-driving vehicles.

As we know, thanks to the removal of pesky human error, autonomous vehicles are set to be safer than regular cars, but that doesn’t make an accident impossible. Some incidents are simply unavoidable. Imagine the following scenario: your little Google Car is suddenly presented with a group of pedestrians in the road. It’s too late to stop, so the car’s ‘brain’ faces two choices: plough into the crowd and kill 10 or more pedestrians, or take evasive action that would result in a crash, killing the occupant(s) of the car.

So, what would the autonomous car do in such a situation? It’d have to go for the crashy option and kill you. "Some situations will require AVs to choose the lesser of two evils," states a fascinating paper from a team led by Jean-François Bonnefon of the Toulouse School of Economics, which discusses these kinds of dilemmas in depth.

Looks innocent enough, but it might one day be forced to kill you...

It’s all about constructing "moral algorithms", which will inevitably be rather difficult to define. "We argue that, to achieve these objectives, manufacturers and regulators will need psychologists to apply the methods of experimental ethics to situations involving AVs and unavoidable harm," the paper states.
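To make the "lesser of two evils" idea concrete, here’s a minimal sketch in Python of what a purely utilitarian moral algorithm might look like. To be clear, none of this comes from the paper itself - the Action class, the casualty estimates and the choose_action function are all illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Action:
    """One manoeuvre available to the car, with a crude estimate of the harm it causes."""
    name: str
    expected_casualties: float  # estimated deaths/serious injuries if this action is taken

def choose_action(actions: list[Action]) -> Action:
    """Pick the 'lesser of two evils': the action with the fewest expected casualties."""
    return min(actions, key=lambda a: a.expected_casualties)

# The dilemma from the article: plough into the crowd, or swerve and sacrifice the occupant.
options = [
    Action("continue into crowd", expected_casualties=10.0),
    Action("swerve into barrier", expected_casualties=1.0),  # the car's occupant(s)
]

print(choose_action(options).name)  # prints "swerve into barrier"
```

Of course, picking the smallest number is the trivial part. The genuinely hard problem - and the reason the researchers call for experimental ethics - is deciding how those casualty estimates are produced and weighted in the first place.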

The researchers looked at a huge variety of issues, for instance: if the car can avoid hitting a motorcyclist by crashing into something, should it? After all, the person in the car will be better protected and less likely to be seriously injured or killed.

There’s also the problem of the public being dissuaded from buying self-driving cars in the knowledge that they’re at risk of being offed for the greater good. In that case, a system designed to save lives could - ironically - make the roads more dangerous by putting buyers off comparatively safer autonomous cars. Another complication could arise if the moral algorithm varies from car to car. "If a manufacturer offers different versions of its moral algorithm, and a buyer knowingly chose one of them, is the buyer to blame for the harmful consequences of the algorithm’s decisions?" the paper asks.

All of this serves as another reminder - like last year’s FBI report about the potential dangers of autonomous cars - that self-driving vehicles bring with them a dizzying array of questions and challenges that need to be solved before they take to the roads en masse. Regardless of what we think of self-driving vehicles as petrolheads, it’s fascinating to see the debate play out.
