Imagine a self-driving car making its way down a foggy road that is suddenly blocked by two separate obstacles – is one of them an object? A person? An animal? Would the autonomous vehicle make the right split-second decision on which one to spare? Can algorithms be used to make decisions in scenarios where harming human beings is possible, probable, or even unavoidable? A study from the Institute of Cognitive Science in Germany’s Osnabrück University suggests that autonomous vehicles have the capability to address moral dilemmas in road traffic.

Taking a Test Drive

Modeling the full complexity of human moral decision-making currently seems out of reach. Taking a narrower approach, the Osnabrück researchers isolated a collision-avoidance scenario and investigated whether human moral behavior in that setting could be described well enough by a simple model.

An overview of the experimental setting: participants were seated at the wheel of a virtual car driving towards a set of obstacles in a suburban setting. A collision was unavoidable, and participants were only given the choice of which of the two obstacles they would spare and which one they would sacrifice. (Copyright: Osnabrück University)

The team used virtual reality to immerse participants in simulated road traffic scenarios: in the study, each participant drove a virtual car down a suburban street.

Each driver’s path was eventually blocked by two randomly sampled obstacles – inanimate objects, humans, or animals – one in each lane. Subjects had to choose which of the two obstacles they would sacrifice in order to spare the other.

The observed decisions were then statistically analyzed and translated into rules that machines could apply. According to the researchers, moral behavior in the narrow context of unavoidable traffic collisions can be explained by rather simple value-of-life models that assign a value to every human, animal, and object.
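As a rough illustration of what such a value-of-life model might look like (a sketch, not the study’s actual model), the snippet below assigns each obstacle category a single numeric value and, in a forced two-way choice, sacrifices the obstacle with the lower value. The category names and values are placeholders, not figures from the paper.

```python
# Minimal sketch of a "value-of-life" collision model.
# Category values are illustrative placeholders, NOT results from the study.

# Hypothetical value assigned to each obstacle category (higher = more worth sparing).
VALUE_OF_LIFE = {
    "adult": 1.0,
    "child": 1.2,
    "dog": 0.6,
    "goat": 0.4,
    "trash_can": 0.05,
}

def choose_lane_to_hit(left_obstacle: str, right_obstacle: str) -> str:
    """Return which lane ('left' or 'right') the car steers into,
    i.e. which obstacle is sacrificed to spare the other."""
    left_value = VALUE_OF_LIFE[left_obstacle]
    right_value = VALUE_OF_LIFE[right_obstacle]
    # Sacrifice the obstacle with the lower assigned value;
    # break ties in favour of staying in the current (left) lane.
    return "left" if left_value <= right_value else "right"

# Example: a dog in the left lane, an adult pedestrian in the right lane.
print(choose_lane_to_hit("dog", "adult"))  # -> "left" (the dog is sacrificed)
```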

"The rules don’t have to be formulated in an abstract manner by a human sitting at their desk, but can be derived and learned from human behavior directly,” said Osnabrück University’s Leon René Suetfeld, first author of the study. “This raises the question of whether we should make use of these learned and conceptualized rules in machines as well.”

Of course, there is also the philosophical question: What constitutes a moral decision? And can a non-conscious computer make a true moral decision? According to Suetfeld, that is a matter of definition rather than the relevant question.

“In practical terms, the outcome of the decision and the according behavior is what matters,” said Suetfeld. “The question thus is: ‘Can we teach a computer to make decisions like a human would?’”
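One way to make that concrete, sketched here under the assumption that the raw data consist of pairwise spare-or-sacrifice choices like those recorded in the VR study, is to estimate a value for each obstacle category from how often it was spared when it appeared. The records and category names below are invented for illustration, not data from the paper.

```python
from collections import defaultdict

# Toy log of observed pairwise choices: (left_obstacle, right_obstacle, spared_side).
# Invented for illustration only.
observed_choices = [
    ("adult", "dog", "left"),
    ("dog", "trash_can", "left"),
    ("adult", "trash_can", "left"),
    ("child", "adult", "left"),
    ("dog", "adult", "right"),
]

appearances = defaultdict(int)
spared = defaultdict(int)

for left, right, spared_side in observed_choices:
    appearances[left] += 1
    appearances[right] += 1
    spared[left if spared_side == "left" else right] += 1

# Crude learned "value of life": fraction of encounters in which a category was spared.
learned_values = {cat: spared[cat] / appearances[cat] for cat in appearances}
print(learned_values)
```

Such learned values could then be plugged into a decision rule like the one sketched earlier, rather than being hand-specified by a designer.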

Colliding Ideas

Germany's Federal Ministry of Transport and Digital Infrastructure (BMVI) formulated 20 ethical principles for self-driving cars, including guidelines like "the protection of individuals takes precedence over all other utilitarian considerations," and "in the event of unavoidable accident situations, any distinction based on personal features (age, gender, physical or mental constitution) is strictly prohibited." The findings from Osnabrück appear to contradict the eighth principle: the assumption that moral decisions cannot be modeled.

The accuracy of any such model depends heavily on the level of agreement within a given population, said Suetfeld: if everyone held vastly different moral values, no single model could describe them all well at the same time.

“So in essence, if most members of a society share similar moral views or behave similarly in the scenario in question, then we can model these decisions with good accuracy,” said Suetfeld. “Our study suggests that this may be the case.”

The research was published in Frontiers in Behavioral Neuroscience.

What do you think? Can autonomous systems make moral judgments?
