Should Self-Driving Cars Be Programmed To Kill? Most Say Yes

As the day when self-driving cars can finally be purchased draws closer, so too does the need to build a system of robot ethics into these cars’ algorithms, raising an interesting question: Who should the car attempt to save in the event of an unavoidable accident?

And how does that answer change depending on how many people are likely to be injured inside (and outside) of the self-driving car?

Jean-Francois Bonnefon from the Toulouse School of Economics in France is working on these questions using experimental ethics.

What is experimental ethics? It is a method of resolving ethical dilemmas in which the solution is simply the most popular answer to the ethical question among a large group of respondents.
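To make that idea concrete, here is a deliberately simple Python sketch of how a majority answer could be pulled out of survey responses. The answer labels and vote counts are hypothetical illustrations, not figures from Bonnefon’s study.

```python
# A minimal sketch of the "experimental ethics" idea: poll many people
# on a dilemma and adopt whichever answer is most common.
# The labels and counts below are made up for illustration only.
from collections import Counter

def majority_answer(responses):
    """Return the most common answer among survey responses."""
    answer, _count = Counter(responses).most_common(1)[0]
    return answer

responses = (
    ["sacrifice_owner"] * 68 +   # hypothetical respondents favoring swerving
    ["protect_owner"] * 32       # hypothetical respondents favoring staying course
)
print(majority_answer(responses))  # -> "sacrifice_owner"
```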

The reason Bonnefon is pursuing such a democratic approach to the problem is that eventually, someone must buy these cars. With the promise that autonomous vehicles will eventually lead to an overall drop in accidents once they become widespread, it is perhaps forgivable that Bonnefon and others have begun their investigation of the issue in this manner.

This is illustrated by the example of an autonomous vehicle heading toward an unavoidable collision with a large group of people. Should it risk killing its own driver by swerving into a barrier, or continue on through the crowd?
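One way to see what “algorithmic morality” might mean in practice is a toy decision rule that minimizes expected casualties. The sketch below is purely illustrative, with invented option names and numbers; it does not describe any manufacturer’s actual logic or Bonnefon’s model.

```python
# A hypothetical "minimize expected casualties" rule for the dilemma above.
# Option names and casualty estimates are illustrative assumptions only.
def choose_action(options):
    """Pick the option with the fewest expected casualties."""
    return min(options, key=lambda o: o["expected_casualties"])

options = [
    {"action": "continue_into_crowd", "expected_casualties": 10},
    {"action": "swerve_into_barrier", "expected_casualties": 1},  # the driver
]
print(choose_action(options)["action"])  # -> "swerve_into_barrier"
```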

Bonnefon found that participants in his study would prefer that everyone buy the owner-sacrificing car rather than one that seeks only to protect its owner.

Bonnefon and others still have much more work to do on these questions, but he noted the issue’s importance: “As we are about to endow millions of vehicles with autonomy, taking algorithmic morality seriously has never been more urgent.”

Mercedes has offered driver-assistance features in its models since 2013, but unlike Tesla’s newly released Autopilot, these do not include a speed-maintenance feature that lets the car nearly drive itself on simple highways.

Tesla spokesperson Khobi Brooklyn attempted to downplay the potential danger of the new software release, which is currently the closest consumers can get to a self-driving car, while making it clear that the car remains its owner’s responsibility.

“Tesla is very clear with what we’re building, features to assist the driver on the road. Similar to the autopilot function in airplanes, drivers need to maintain control and responsibility of their vehicle while enjoying the convenience of Autopilot in Model S.”

Aside from takeoff and landing, however, air travel is relatively free of the dynamics present in city or even interstate driving, where multiple vehicles and obstacles are always within seconds (or less) of a potential collision.

Alain Kornhauser, director of Princeton’s transportation program, said that car companies and regulators have not yet worked out the best method, or the best time, for a car to alert the driver to take back control from the computer: “You have to show some respect, because you’re driving a lethal weapon.”

So while we’re not there yet, the future is fast approaching, and with it the question: Should our self-driving cars sacrifice one to save many?
