Could Self-Driving Cars Be Programmed to Kill You? The Ethics of the Robotic Road
Could your self-driving car kill you? It could, if doing so saves more lives overall. Researchers have taken a closer look at the ethical puzzles a self-driving car may face, asking whether its software should be allowed to cause a deadly crash in order to spare a greater number of people.
The researchers looked at a classic ethical puzzle: the trolley problem. Imagine that you're in charge of the switch on a trolley track. The express is due any minute, but as you glance down the line you see that a school bus has stalled at the level crossing. At the same time, your own child has crawled onto the alternate track. Flipping the switch would save the schoolchildren on the bus but kill your own child; there isn't time to save both. What would you do?
"Ultimately, this problem devolves into a choice between utilitarianism and deontology," said Ameen Barghi, one of the researchers, in a news release.
Google's cars can already handle real-world hazards, such as other cars suddenly swerving in front of them. In some situations, though, a crash is unavoidable. So how should a Google car be programmed to handle a no-win situation, such as a blown tire that forces a choice between swerving into oncoming traffic and steering into a retaining wall?
"Rule utilitarianism says that we must always pick the most utilitarian action regardless of the circumstances, so this would make the choice easy for each version of the trolley problem," said Barghi. In other words, you count up the individuals involved and go with the option that benefits the majority.
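The counting rule Barghi describes can be reduced to a very simple decision procedure. The sketch below is purely illustrative: the outcome names and casualty counts are hypothetical, not drawn from any real vehicle's software.

```python
def utilitarian_choice(outcomes):
    """Pick the action with the fewest people harmed.

    `outcomes` maps an action name to the number of people
    harmed if that action is taken (hypothetical estimates).
    """
    return min(outcomes, key=outcomes.get)

# Hypothetical blown-tire scenario: swerving into oncoming traffic
# harms three people; steering into the retaining wall harms one.
scenario = {"swerve_into_traffic": 3, "steer_into_wall": 1}
print(utilitarian_choice(scenario))  # -> steer_into_wall
```

The point of the sketch is how little the rule itself decides: everything contentious lives in the numbers fed into it, including whether the car's own occupant counts the same as everyone else.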
Certainly, this problem is something to consider when designing self-driving cars. What the "right" answer is, though, is something that needs to be debated.