At a busy intersection of cars, pedestrians, and cyclists, anything is possible. New software from the Technical University of Munich (TUM) aims to predict all of those possibilities, so that self-driving vehicles never cause accidents.
The TUM software module continuously analyzes and predicts events while driving, using vehicle sensor data that is gathered and evaluated every millisecond.
The software calculates all possible movements for every traffic participant — provided those participants adhere to the road traffic regulations.
By accounting for every variation of the scenario, the system can anticipate a maneuver in advance: a car pulling out from a stop sign, for example.
While the system determines a variety of movement options for the vehicle, the program simultaneously calculates potential emergency maneuvers in which the vehicle can be moved out of harm's way (without endangering others) by accelerating or braking. The autonomous vehicle may only follow routes that are free of foreseeable collisions and for which an emergency maneuver option has been identified.
The quick calculations are made possible by simplified dynamic models. These depict a larger set of possible positions than actual road users can reach — an over-approximation, never an underestimate — so no real behavior is missed. So-called reachability analysis is used to calculate potential future positions a car or a pedestrian might assume.
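To make the idea concrete, here is a minimal sketch (our own illustration, not TUM's code) of reachability analysis for a one-dimensional point-mass model with bounded acceleration; the function name and parameter values are assumptions for illustration only:

```python
# Minimal sketch of reachability analysis for a 1-D point-mass model.
# The reachable position interval at time t over-approximates where the
# road user could possibly be, given bounded velocity and acceleration.

def reachable_interval(x0, v_min, v_max, a_max, t):
    """Interval of positions reachable at time t from position x0, with
    initial velocity in [v_min, v_max] and |acceleration| <= a_max."""
    # Extreme trajectories: brake as hard as possible vs. accelerate fully.
    lo = x0 + v_min * t - 0.5 * a_max * t * t
    hi = x0 + v_max * t + 0.5 * a_max * t * t
    return lo, hi

# Example: a pedestrian at x = 0 m, walking at 1-2 m/s, able to
# accelerate or decelerate at up to 0.6 m/s^2.
lo, hi = reachable_interval(0.0, 1.0, 2.0, 0.6, 3.0)
# After 3 s the pedestrian may be anywhere in the interval [lo, hi];
# a planner must treat that whole interval as potentially occupied.
```

This toy model ignores speed limits and lateral motion, so it is a deliberately coarse over-approximation — which is exactly the property the safety argument needs.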
The formal verification technique guarantees "legal safety," meaning that the autonomous vehicle never causes an accident as long as the other participants on the road behave in compliance with traffic rules.
"Our technique serves as a safety layer for existing motion planning frameworks that provide intended trajectories for autonomous vehicles," says the team's report, published in Nature Machine Intelligence this month.
The software verifies whether intended trajectories comply with legal safety, and provides fallback solutions in safety-critical situations. Using the software, the autonomous vehicle executes only safe trajectories, even when using an intended trajectory planner unaware of other traffic participants.
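As a rough sketch of this safety-layer idea (our own simplification in Python, not the TUM implementation — the trajectory and occupancy representations here are invented for illustration):

```python
# Hypothetical safety layer: an intended trajectory is executed only if
# every sampled point avoids the predicted occupancy sets AND a fail-safe
# (emergency) trajectory has also been verified; otherwise fall back.

def is_collision_free(trajectory, occupied_regions):
    """trajectory: list of (t, x) samples. occupied_regions: dict mapping
    a time step t to a list of (lo, hi) position intervals predicted to
    be occupied by other traffic participants at that time."""
    for t, x in trajectory:
        for lo, hi in occupied_regions.get(t, []):
            if lo <= x <= hi:
                return False
    return True

def choose_trajectory(intended, fail_safe, occupied_regions):
    """Execute the intended trajectory only if it and the fail-safe
    maneuver appended to it are both collision-free; else fall back."""
    if (is_collision_free(intended, occupied_regions)
            and is_collision_free(fail_safe, occupied_regions)):
        return intended
    return fail_safe  # previously verified emergency maneuver

intended = [(0, 0.0), (1, 10.0), (2, 20.0)]
fail_safe = [(2, 20.0), (3, 24.0), (4, 25.0)]   # brake to a stop
occupied = {2: [(30.0, 35.0)]}   # another car's predicted occupancy at t=2
plan = choose_trajectory(intended, fail_safe, occupied)
```

Here the intended trajectory stays clear of the predicted occupancy, so it is executed; had any sample fallen inside an occupied interval, the vehicle would switch to the fail-safe braking maneuver instead.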
To evaluate the system, the computer scientists created a virtual model. The test environment reflected everyday traffic scenarios, based on real data collected during drives through Munich with an autonomous vehicle.
"Using the simulations, we were able to establish that the safety module does not lead to any loss of performance in terms of driving behavior, the predictive calculations are correct, accidents are prevented, and in emergency situations the vehicle is demonstrably brought to a safe stop," said Matthias Althoff, Professor of Cyber-Physical Systems at TUM.
Althoff and two of his fellow researchers from the Munich School of Robotics and Machine Intelligence at TUM, Christian Pek and Stefanie Manzinger, responded to questions via email.
In the Q&A below, see why the team believes its system will provide the kind of assurance needed for self-driving cars to truly take to the road.
Tech Briefs: So, let’s take the scenario referenced in your press release. Let’s say a self-driving car with your software approaches an intersection. One vehicle jets out, a pedestrian steps into the lane, a cyclist rides by, maybe even a bird flies out into the street. What does your software do to avoid a collision? What kinds of calculations are being made?
TUM: Given the current sensor measurements of the autonomous vehicle, our method predicts all possible legal behaviors of other traffic participants, like the pedestrian and the cyclist. For any intended motion of the autonomous vehicle, our method verifies that this motion is collision-free against the predicted behaviors of other traffic participants and provides a safe fall-back plan for emergency situations.
Tech Briefs: And how is your system improving upon previous ADAS technologies?
TUM: Our software serves as a safety layer for motion planning and verifies whether the decisions of the autonomous vehicle are safe during its operation.
Current autonomous driving systems usually incorporate most-likely evolutions of a traffic scenario — for example, the preceding vehicle will most likely accelerate. However, this design might result in unsafe behaviors if traffic participants behave differently than expected — for example, if the preceding vehicle decelerates.
Our algorithm addresses this problem by computing all possible future evolutions of the scenario by considering all motions of other traffic participants that are compliant with traffic rules. As a result, we are able to ensure that decisions are safe regardless of the future legal motion of other traffic participants.
Tech Briefs: You mention the method predicts “all possible legal behaviors.” How does the method do so, and how are you able to say “all” possible future evolutions of a scenario, for example? How are you able to cover the range of possibilities?
TUM: We use a method called reachability analysis to compute all possible legal behaviors of other traffic participants. The reachable set of a traffic participant is the set of states, such as the positions and velocities, that the traffic participant can reach at a specific point in time when starting from an initial set of states considering all admissible actions, like all possible steering angles and accelerations.
We use the resulting reachable sets to predict regions in the environment that are possibly occupied by traffic participants in the future. Since the exact reachable set is difficult to compute, we use simplified motion models for the computation. These simplified models are guaranteed to capture all possible behaviors that a real traffic participant can perform. Thus, safety is not compromised.
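The containment guarantee can be illustrated with a small self-check (our own example, not TUM code): sample admissible acceleration sequences of a "detailed" model and confirm that every resulting position lies inside the reachable interval of a simplified point-mass model with a slightly larger acceleration bound:

```python
import random

# Illustration of over-approximation: a simplified point-mass bound with
# acceleration limit a_simple covers any more detailed model whose
# accelerations never exceed a_detail <= a_simple. We sample random
# admissible acceleration sequences of the detailed model and check that
# every resulting position stays inside the simplified reachable interval.

def simulate(x0, v0, accels, dt):
    """Integrate piecewise-constant accelerations; exact per step."""
    x, v = x0, v0
    for a in accels:
        x += v * dt + 0.5 * a * dt * dt
        v += a * dt
    return x

def simple_bounds(x0, v0, a_simple, t):
    """Reachable position interval of the simplified point-mass model."""
    lo = x0 + v0 * t - 0.5 * a_simple * t * t
    hi = x0 + v0 * t + 0.5 * a_simple * t * t
    return lo, hi

random.seed(0)
dt, steps = 0.1, 30                # 3-second horizon
a_detail, a_simple = 2.0, 2.5      # simplified bound dominates
lo, hi = simple_bounds(0.0, 5.0, a_simple, dt * steps)
contained = all(
    lo <= simulate(0.0, 5.0,
                   [random.uniform(-a_detail, a_detail) for _ in range(steps)],
                   dt) <= hi
    for _ in range(1000)
)
```

Because the simplified bound dominates the detailed model's accelerations, containment holds by construction; the sampling merely illustrates the guarantee that the simplified model never misses a real behavior.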
Tech Briefs: What role does “certainty” play in the calculations? Does the vehicle only perform maneuvers that have a high degree of certainty?
TUM: Our premise is that autonomous vehicles only execute motions that have been verified as legally safe. Our algorithm verifies the safety of decisions of the autonomous vehicle during its operation and only allows the execution of provably safe decisions. In emergency situations, our safety layer stops the autonomous vehicle in dedicated safe areas.
Tech Briefs: And you can trust that level of certainty? Do you trust a self-driving car, with this software, to avoid accidents? Do you believe that others will develop a trust in self-driving cars, in the future?
TUM: We believe that autonomous vehicles will only be fully trusted if their safety can be ensured. To ensure the safety of autonomous vehicles, different components must be considered, such as perception, motion planning, and control. Our software addresses safety for motion planning; since we use formal verification, we can show that autonomous vehicles do not cause self-inflicted accidents.
What do you think? Does this system make you more likely to trust a self-driving vehicle? Share your questions and comments below.