A new virtual testing environment breaks the ‘curse of rarity’ for autonomous vehicle emergency decision-making.
The push toward truly autonomous vehicles has been hindered by the cost and time associated with safety testing, but a new system developed at the University of Michigan shows that artificial intelligence can reduce the testing miles required by 99.99 percent.
It could kick off a paradigm shift that enables manufacturers to more quickly verify whether their autonomous vehicle technology can save lives and reduce crashes. In a simulated environment, vehicles trained by artificial intelligence perform perilous maneuvers, forcing the AV to make the kinds of decisions that confront drivers only rarely on the road but are needed to train the vehicles more effectively.
To repeatedly encounter those kinds of situations for data collection, real-world test vehicles would need to drive for hundreds of millions to hundreds of billions of miles.
“The safety-critical events — the accidents, or the near misses — are very rare in the real world, and oftentimes AVs have difficulty handling them,” said Henry Liu, U-M professor of civil engineering and director of both Mcity, a public-private transportation and mobility research partnership led by U-M, and the Center for Connected and Automated Transportation, a regional transportation research center funded by the U.S. Department of Transportation.
U-M researchers refer to the problem as the “curse of rarity,” and they’re tackling it by learning from real-world traffic data that contains rare safety-critical events. Testing conducted on test tracks mimicking urban as well as highway driving showed that the AI-trained virtual vehicles can accelerate the testing process by thousands of times. The study appears on the cover of Nature.
“The AV test vehicles we’re using are real, but we’ve created a mixed reality testing environment. The background vehicles are virtual, which allows us to train them to create challenging scenarios that only happen rarely on the road,” Liu said.
To train the background vehicles, the U-M team used an approach that strips away non-safety-critical information from the driving data used in the simulation. Basically, it gets rid of the long spans when other drivers and pedestrians behave in responsible, expected ways, but preserves the dangerous moments that demand action, such as another driver running a red light.
By using only safety-critical data to train the neural networks that make maneuver decisions, test vehicles can encounter more of those rare events in a shorter amount of time, making testing much cheaper.
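To make the idea concrete, here is a minimal sketch of that data-filtering step. It is an illustration only, not the researchers' implementation: the use of a time-to-collision measure as the criticality signal, the 4-second threshold, and all field and function names are assumptions made for the example.

```python
# Minimal sketch of filtering driving logs down to safety-critical moments.
# Assumption: each time step carries a time-to-collision (TTC) estimate, and
# a step counts as safety-critical when TTC drops below a threshold.
# The 4-second threshold is an arbitrary illustrative value.

from dataclasses import dataclass

@dataclass
class TimeStep:
    state: tuple         # positions/velocities of nearby road users
    action: str          # maneuver taken by a background vehicle
    ttc_seconds: float   # time-to-collision, used as the criticality measure

def keep_safety_critical(log, ttc_threshold=4.0):
    """Drop the long uneventful spans; keep only steps where a crash or
    near miss is plausible, so training focuses on rare, dangerous moments."""
    return [step for step in log if step.ttc_seconds < ttc_threshold]

# Example: a log dominated by routine driving yields only the risky steps.
log = [
    TimeStep(state=(0.0, 30.0), action="cruise",     ttc_seconds=12.5),
    TimeStep(state=(5.0, 29.8), action="cruise",     ttc_seconds=11.0),
    TimeStep(state=(9.0, 12.1), action="cut_in",     ttc_seconds=2.1),
    TimeStep(state=(9.5, 11.7), action="hard_brake", ttc_seconds=1.4),
]
print(keep_safety_critical(log))  # only the cut-in and hard-brake steps remain
```

Training the neural networks that control the virtual background vehicles on this filtered data, rather than on everything recorded, is what lets rare events dominate the learning signal.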
“Dense reinforcement learning will unlock the potential of AI for validating the intelligence of safety-critical autonomous systems such as AVs, medical robotics, and aerospace systems,” said Shuo Feng, assistant professor in the Department of Automation at Tsinghua University and former assistant research scientist at the U-M Transportation Research Institute.
“It also opens the door for accelerated training of safety-critical autonomous systems by leveraging AI-based testing agents, which may create a symbiotic relationship between testing and training, accelerating both fields.”
And it’s clear that training, along with the time and expense involved, is an impediment. An October Bloomberg article stated that although robotaxi leader Waymo’s vehicles had driven 20 million miles over the previous decade, far more data was needed.
“That means,” the author wrote, “its cars would have to drive an additional 25 times that total before we’d be able to say, with even a vague sense of certainty, that they cause fewer deaths than bus drivers.”
Testing was conducted at Mcity’s test facility in Ann Arbor, as well as the highway test track at the American Center for Mobility in Ypsilanti.
“We have billions of years of evolution that back us in performing the daily driving task,” said Greg McGuire, managing director of Mcity. “Teaching computers how to replace us in this task has proven to be quite complicated.”
Launched in 2015, the Mcity Test Facility was the world’s first purpose-built test environment for connected and autonomous vehicles. With new support from the National Science Foundation, outside researchers will soon be able to run remote, mixed-reality tests using both the simulation and physical test track, similar to those reported in this study.
Real-world data sets that support Mcity simulations are collected from smart intersections in Ann Arbor and Detroit, with more intersections to be equipped. Each intersection is fitted with privacy-preserving sensors to capture and categorize each road user, identifying its speed and direction.
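For a sense of what such anonymized observations might look like, here is a hypothetical record structure. The field names, units, and values are assumptions for illustration and do not represent Mcity's actual data schema; the key point is that road users are captured by class, speed, and direction rather than by identity.

```python
# Hypothetical illustration of an anonymized road-user observation from a
# roadside sensor; field names and units are assumptions, not Mcity's schema.

from dataclasses import dataclass

@dataclass
class RoadUserObservation:
    user_class: str      # e.g. "car", "bicycle", "pedestrian" -- no identity kept
    speed_mps: float     # speed in meters per second
    heading_deg: float   # direction of travel, degrees clockwise from north
    timestamp_s: float   # time of observation, seconds since epoch

obs = RoadUserObservation(user_class="pedestrian", speed_mps=1.4,
                          heading_deg=270.0, timestamp_s=1_700_000_000.0)
print(obs)
```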