A new model for self-driving cars learns from past failures to spot new ones ahead of time, up to seven seconds in advance.

With autonomous vehicles, an unknown or complex driving situation (like a crowded intersection) can cause a disengagement of the self-driving system, either through automatic safety measures or human intervention.

An artificial-intelligence model from the Technical University of Munich (TUM) uses thousands of real-life traffic situations — specifically, recorded disengagement sequences from test drives — as training data to predict future failures.

In order to predict failures as early as possible, the machine-learning approach classifies sequences of sensor data as either failure or success.

If the system spots a driving situation of a kind that the control system was previously unable to handle, the driver can be warned in advance of a possible critical situation.

The TUM-developed safety technology uses sensors and cameras to capture vehicle and environmental conditions, such as the steering wheel angle, road conditions, weather, visibility, and speed. The A.I. system, based on a recurrent neural network (RNN) trained on thousands of real-traffic situations, learns to recognize patterns in the data.
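
To make the setup concrete, here is a minimal sketch (not the TUM team's actual code) of how an RNN can classify a window of sensor readings as failure or success; the feature count, window length, and sampling rate are illustrative assumptions:

```python
import torch
import torch.nn as nn

class DisengagementPredictor(nn.Module):
    """Minimal sketch of an RNN that classifies a window of sensor
    readings (steering angle, speed, etc.) as failure vs. success.
    Feature count and sizes are illustrative assumptions, not TUM's code."""

    def __init__(self, n_features: int = 8, hidden_size: int = 64):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)  # one logit: failure probability

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time_steps, n_features), e.g. 3 s of data at 10 Hz -> 30 steps
        _, (h_n, _) = self.rnn(x)
        return torch.sigmoid(self.head(h_n[-1]))  # (batch, 1) failure probability

# Example: a batch of 4 windows, each 30 time steps of 8 sensor features
model = DisengagementPredictor()
p_fail = model(torch.randn(4, 30, 8))
print(p_fail.shape)  # torch.Size([4, 1])
```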

The car itself is treated as a black box: the model looks only at the data going in and the behavior coming out. The system learns introspectively from its own previous mistakes, according to the Munich team.

“The big advantage of our technology: we completely ignore what the car thinks. Instead we limit ourselves to the data based on what actually happens and look for patterns,” said lead researcher Prof. Eckehard Steinbach, who is also a member of the Board of Directors of the Munich School of Robotics and Machine Intelligence (MSRM) at TUM. “In this way, the A.I. discovers potentially critical situations that models may not be capable of recognizing, or have yet to discover.”

The system offers a safety function that knows when and where the car has weaknesses, says Steinbach.

Steinbach and his team's method combines two models. An image-based model learns to detect generally challenging situations, like a busy city street. A second model, based on vehicle-state data, detects rapid changes immediately before a failure, such as sudden braking or swerving. The outputs of the two models are fused by averaging their individual failure probabilities.
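
In code, that fusion step could be as simple as the hedged sketch below; treating the two model outputs as plain probabilities and weighting them equally are assumptions for illustration, not details from the study:

```python
def fuse_failure_probability(p_image: float, p_state: float) -> float:
    """Late fusion by averaging: combine the image-based model's and
    the state-based model's failure probabilities into one score.
    Equal weighting is an assumption, not a detail from the article."""
    return 0.5 * (p_image + p_state)

# Example: the scene model is mildly concerned, the state model strongly so
print(fuse_failure_probability(0.5, 0.75))  # 0.625
```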

The BMW Group evaluated the "introspective failure prediction approach" through 14 hours of autonomous driving on public roads, analyzing around 2,500 situations where the driver had to intervene.

According to a study released in December 2020, the late-fusion approach predicts failures with better than 85 percent accuracy up to seven seconds before they occur, at a false-positive rate of 20 percent.

In a short interview with Tech Briefs below, Steinbach talks about the strengths of a black-box approach, as well as the limitations of today's vehicle-safety measures.

Tech Briefs: I think this is an interesting idea: “We completely ignore what the car thinks. Instead we limit ourselves to the data based on what actually happens and look for patterns.” What are some examples of patterns that a model may not recognize?

Prof. Eckehard Steinbach: In our work, we look at the state of the car, such as braking and steering, as well as at the camera images the car obtains to detect patterns that lead to disengagements. While this allows our model to detect a large percentage of situations where a human has to take over, not all information about a driving scene is captured in this data.

As a simple example, a pattern of repeated braking might reflect regular driving in warm weather, but might indicate an impending disengagement if the roads are icy and slippery. If the camera images do not capture this information about the environment, that pattern cannot be used to distinguish between regular and disrupted driving. While the camera information is usually enough to assess the condition of the road, such patterns can still be hard to recognize.

Tech Briefs: Why is it an advantage to “ignore what the car thinks”?

Prof. Eckehard Steinbach: If the car assessed a situation completely correctly, there would be no need for the driver to intervene. However, overconfidence is a significant challenge for many models used in autonomous driving. By recording and learning from those situations, we can learn to detect that a new situation is problematic even if the car is overconfident about it.

Additionally, observing sequences of patterns in the car's state and surroundings allows our model to effectively extrapolate into the future and predict disengagements up to seven seconds ahead. That far in advance, the car's assessment of the scene might still be completely correct, meaning it could not be used to predict the challenging scenario. The raw collected data, on the other hand, might already contain patterns that have led to failures before and therefore make it possible to predict disengagements in advance.

Tech Briefs: How is the system able to determine a “critical” scenario, seven seconds in advance? Also, when that detection occurs, what happens next? What does the driver see in the car, and what does the car do?

Prof. Eckehard Steinbach: The key is to observe sequences of data and look for temporal patterns. By considering the past three seconds of recorded data, our model is able to detect patterns that eventually evolve into a scenario where the human driver has to take over control. If you know what to look for, you can spot the first signs of a challenging situation many seconds ahead.
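
As a hedged illustration of that windowing idea (the 10 Hz sampling rate and feature layout are assumptions), the stream of recorded data can be sliced into overlapping three-second windows, each of which is handed to the classifier:

```python
import numpy as np

def sliding_windows(stream: np.ndarray, rate_hz: int = 10, seconds: float = 3.0):
    """Yield overlapping windows covering the most recent `seconds` of data.
    `stream` has shape (time_steps, n_features); the rate is an assumed value."""
    length = int(rate_hz * seconds)
    for end in range(length, len(stream) + 1):
        yield stream[end - length:end]  # shape (length, n_features)

# Example: 60 seconds of 8-feature sensor data sampled at 10 Hz
stream = np.random.randn(600, 8)
print(next(sliding_windows(stream)).shape)  # (30, 8)
```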

Our method achieves this around 85 percent of the time, seven seconds in advance. The remaining 15 percent of situations can be explained by the fact that some challenging scenarios develop in a very short time, such as pedestrians suddenly emerging from between parked cars and approaching the road. When the detection occurs, the driver needs to be alerted.

Tech Briefs: How is the driver alerted?

Prof. Eckehard Steinbach: The implementation of this alert depends on the specific choice of the human-machine interface, but the driver needs to know that their control of the car will be required within the next seven seconds. This time also allows for the car to plan a safe stopping maneuver in case the human driver does not react to the prompt.
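
The decision logic implied here might be sketched as follows; the threshold value and the helper functions `warn_driver` and `plan_safe_stop` are hypothetical placeholders, not part of the published system:

```python
import time

ALERT_THRESHOLD = 0.8    # assumed decision threshold on the fused probability
TAKEOVER_BUDGET_S = 7.0  # prediction horizon reported in the study

def warn_driver() -> None:
    # Hypothetical HMI hook: a real vehicle would raise a visual/audible takeover request
    print("Takeover request: please take control of the vehicle.")

def plan_safe_stop() -> None:
    # Hypothetical fallback: bring the vehicle to a safe stop
    print("No driver response: initiating safe stopping maneuver.")

def handle_prediction(p_failure: float, driver_responded) -> None:
    if p_failure < ALERT_THRESHOLD:
        return                      # no warning needed yet
    warn_driver()
    deadline = time.monotonic() + TAKEOVER_BUDGET_S
    while time.monotonic() < deadline:
        if driver_responded():
            return                  # driver took over in time
        time.sleep(0.1)
    plan_safe_stop()                # budget exhausted without a response

# Example: simulate a driver who never responds
handle_prediction(0.9, driver_responded=lambda: False)
```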

Tech Briefs: How did your test-drive go? What was the most impressive detection you saw?

Prof. Eckehard Steinbach: Since the test drives were performed by the BMW Group, I did not participate in them inside the car. Our group later worked with the recordings of the drives. The most impressive element of the detection system is how early the prediction often occurs. At the time of the detection, the driving scenario can still seem regular, for example, only for the traffic at the next intersection to turn into a complicated, crowded environment where the human took over to ensure safety a few seconds later.

Tech Briefs: What is still challenging for self-driving cars to detect?

Prof. Eckehard Steinbach: One important challenge in autonomous driving is novel or out-of-distribution data. If the car enters a situation that it has not been trained for or sees an object it does not know, problems can arise. Such novel scenes cause human intervention, which leads to those scenes being used as training data for our approach. While our method can then help to detect such a new challenging environment the next time it is encountered, detecting and correctly managing an entirely novel scene the first time it is encountered remains a challenging task.
