Researchers leverage AI to accelerate driverless vehicle testing
Work at Mcity 2.0, an NSF-powered facility, could bring autonomous vehicles safely into mainstream use
Advances in autonomous vehicles, or AVs, are bringing driverless cars closer to public use. But the share of U.S. drivers who have concerns about the safety of those vehicles jumped from 55% in 2022 to 68% in 2023, according to a survey by the American Automobile Association.
The high costs and time required to test vehicles in a natural setting are a major challenge. Prevailing approaches usually test AVs through a combination of software simulation, closed-track tests and on-road testing. Validating the safety performance of AVs at the level of human drivers will take hundreds of millions of miles of testing — and the number of miles needed in a naturalistic driving environment, or one that reflects real operating conditions, can reach the hundreds of billions.
First-of-its-kind research at Mcity 2.0 — a University of Michigan vehicle testing facility powered by the U.S. National Science Foundation and U.S. Department of Transportation-funded Center for Connected and Automated Transportation, or CCAT — offers insight into solving this problem by using artificial intelligence to train vehicles. The approach could reduce the number of testing miles required by 99.99%, researchers said.
An intelligent testing environment
The main challenge in testing AVs is reliance on limited open data sets, a problem created by the "curse of rarity": safety-critical events occur so infrequently during testing that most of the data collected are irrelevant for safety evaluation. The challenge is compounded by the "curse of dimensionality," an issue in fields such as machine learning in which the volume of data grows so large that finding the relevant testing data becomes difficult.
The researchers addressed this challenge by developing a new AI approach that uses reinforcement learning with neural networks. The idea is to identify and remove non-safety-critical data and train the neural networks for AVs using only the small fraction of the data that is safety-critical. Discarding the irrelevant data dramatically reduces the variance of the training signal, enabling the neural networks to learn effectively.
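The core filtering idea can be illustrated with a toy sketch (not the authors' code; the episode generator, criticality scores, and threshold below are all hypothetical). Most simulated driving steps are benign and contribute only noise to training, so only steps above a criticality threshold are kept for learning:

```python
import random

def simulate_episode(rng):
    """Toy episode: each step gets a 'criticality' score; most steps are benign.

    The x**8 transform makes high-criticality steps rare, mimicking the
    'curse of rarity' in naturalistic driving data.
    """
    return [{"state": i, "criticality": rng.random() ** 8} for i in range(100)]

def dense_training_batch(episodes, threshold=0.5):
    """Keep only safety-critical steps for training (the 'dense' editing idea).

    Steps below the threshold carry little information about safety-critical
    behavior, so dropping them removes variance from the training signal.
    """
    return [step for ep in episodes for step in ep
            if step["criticality"] >= threshold]

rng = random.Random(0)
episodes = [simulate_episode(rng) for _ in range(50)]
total = sum(len(ep) for ep in episodes)
batch = dense_training_batch(episodes)
print(f"kept {len(batch)} of {total} steps for training")
```

In this toy setup only a small fraction of the steps survive the filter, which is the point: the network trains on a much smaller, much denser data set.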
For testing, the researchers used a neural network to train background vehicles to learn when to execute potentially dangerous maneuvers. In a simulated environment, these AI-trained vehicles perform perilous maneuvers that force the AV under test to make the kinds of decisions drivers confront only rarely on the road but that are needed to better train the vehicles.
This produces an AI-based testing environment that can reduce the required testing miles of AVs by multiple orders of magnitude while ensuring unbiased testing.
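One way such an environment can stay unbiased while exposing the AV to far more danger than natural traffic is importance sampling. The sketch below is a hypothetical illustration (the probabilities and crash rates are invented, and this is not the authors' implementation): dangerous maneuvers are sampled far more often than they occur naturally, and each outcome is reweighted by the likelihood ratio so the estimated crash rate remains unbiased.

```python
import random

P_DANGER = 0.001          # assumed natural probability of a dangerous maneuver
Q_DANGER = 0.5            # boosted probability used in the testing environment
CRASH_GIVEN_DANGER = 0.1  # hypothetical crash rate when the maneuver occurs

def run_test(rng):
    """One simulated encounter: sample from the boosted distribution,
    then reweight the outcome by p/q to keep the estimate unbiased."""
    dangerous = rng.random() < Q_DANGER
    p = P_DANGER if dangerous else 1 - P_DANGER   # natural probability
    q = Q_DANGER if dangerous else 1 - Q_DANGER   # testing probability
    crash = dangerous and rng.random() < CRASH_GIVEN_DANGER
    return (1.0 if crash else 0.0) * (p / q)      # importance-weighted outcome

rng = random.Random(1)
n = 200_000
estimate = sum(run_test(rng) for _ in range(n)) / n
true_rate = P_DANGER * CRASH_GIVEN_DANGER
print(f"estimated crash rate ~ {estimate:.6f} (true rate: {true_rate})")
```

Because half of the simulated encounters are dangerous, crashes appear thousands of times more often than they would naturally, yet the reweighting keeps the final crash-rate estimate centered on the true value — the same unbiasedness property the Mcity researchers emphasize.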
“The safety-critical events — the accidents or the near misses — are very rare in the real world, and oftentimes AVs have difficulty handling them,” said Henry Liu, director of Mcity and professor of civil engineering at the University of Michigan. Liu is also the director of CCAT.
Researchers mainly focused on two layers: moving objects and road geometry. But the approach could be extended to include parameters from other layers, such as weather conditions.
The results showed that the newly developed "dense deep reinforcement learning" approach can accelerate the testing process for AVs both in simulation and on test tracks. The approach can be applied to complex driving environments, including multilane highways, intersections and roundabouts, which previous methods could not handle.
Mcity 2.0
This approach was made possible by new testing capabilities at Mcity 2.0. A $5 million NSF grant was used to expand the facility's original proving ground by integrating the physical test track with a software simulation environment, creating the first cloud-based augmented reality facility for testing AVs. This enables broader participation by providing easier access to top-tier infrastructure for the research community, especially researchers from underserved communities with fewer resources.
The Mcity 2.0 augmented reality testbed integrates three components: a physical test facility, a mobility data center that collects and shares near-real-time traffic information from 21 intersections, and an augmented naturalistic driving simulator that blends real and virtual vehicles. Researchers can remotely configure and control the test facility infrastructure, including traffic lights, crosswalk buttons and rail-crossing arms, and build test scenarios using a web-based graphical user interface.
These findings also open the door to accelerated testing and training of other safety-critical systems, such as medical robots and aerospace systems, researchers said.