FailureNet: Detecting and Preventing AV Failures

Intelligent intersection managers can improve safety by detecting dangerous drivers or failure modes in autonomous vehicles and warning oncoming vehicles as they approach an intersection. FailureNet is a recurrent neural network trained end-to-end on trajectories of both nominal and reckless drivers in a scaled miniature city. It accurately identifies control failures, upstream perception errors, and speeding drivers; the network is trained and deployed with autonomous vehicles in the MiniCity, where it achieves upwards of 84 percent accuracy on hardware.
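The core idea is a recurrent network that classifies a driver's trajectory history into nominal or failure classes. A minimal sketch of that idea as a vanilla RNN in NumPy is shown below; the feature set, hidden size, class labels, and untrained random weights are all illustrative assumptions, not the actual FailureNet architecture.

```python
import numpy as np

# Illustrative class labels; the real system's label set may differ.
CLASSES = ["nominal", "actuation_noise", "perception_error", "speeding"]

class TrajectoryRNN:
    """Toy vanilla RNN that classifies a driver trajectory.

    Weights here are random (untrained); in the real system they would
    be learned end-to-end from nominal and reckless driving data."""

    def __init__(self, n_features=4, n_hidden=16, n_classes=4, seed=0):
        rng = np.random.default_rng(seed)
        self.Wxh = rng.normal(0, 0.1, (n_hidden, n_features))
        self.Whh = rng.normal(0, 0.1, (n_hidden, n_hidden))
        self.Why = rng.normal(0, 0.1, (n_classes, n_hidden))
        self.bh = np.zeros(n_hidden)
        self.by = np.zeros(n_classes)

    def classify(self, trajectory):
        """trajectory: (T, n_features) array of per-timestep driver state
        (e.g. position, speed, steering). The hidden state carries the
        history forward, so the prediction depends on the whole
        trajectory, not just the current frame."""
        h = np.zeros_like(self.bh)
        for x in trajectory:
            h = np.tanh(self.Wxh @ x + self.Whh @ h + self.bh)
        logits = self.Why @ h + self.by
        return CLASSES[int(np.argmax(logits))]
```

Because the classifier consumes the full sequence, a driver who only occasionally swerves can still be flagged from the accumulated hidden state.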



Transcript

00:00:01 (bright music) - Our goal is really to be able to detect reckless and dangerous drivers before they get to the intersection. To do that, we train and deploy recurrent neural networks in a scaled autonomous vehicle platform called the MiniCity. We use real hardware, but at 1/10 scale,

00:00:20 to deploy all of our autonomy and to be able to detect when there's a dangerous driver at an intersection. Typically, there are two paradigms: you either have full-scale vehicles, which are pretty dangerous and expensive for deploying your algorithms, or you do it purely in simulation, which is probably the most popular approach in the industry these days.

00:00:38 We're using a scaled physical platform so we can really see how these algorithms work in the real world, interacting with real hardware, sensors, and delays, the things you actually encounter when you deploy these autonomous vehicles on the road. Safety in general is a really hard thing to monitor, but what's novel about our approach is that we're actually using the traffic lights to observe drivers on the road.

00:00:59 And in doing so, we try to detect from outside of the car whether there's a failure occurring and actually warn other vehicles ahead of time. We looked at four different types of failures that we actually deploy on our scaled hardware. The first one is random noise that we inject into the low-level steering and speed of the vehicle; this simulates a random error that happens in your autonomy stack. The second one is a failure in the perception of the vehicle,

00:01:25 so how it perceives the scene around it. The third one is a speeding driver, to simulate a reckless human driver or, again, another failure in the autonomy stack. And then, finally, we brought human drivers into the MiniCity so we can test with real human driving and show what a reckless driver might look like. We want to look over time and see the pattern

00:01:46 and really the causal nature of whether a failure is occurring in the driver. An RNN lets us really pinpoint the state of the driver, using both the current driving style of the vehicle and the history of the trajectory we've seen in the past. Each of the vehicles has the hardware you would find on a full-scale car,

00:02:06 so we have lidar, we have cameras, we have IMUs, and we also have the onboard compute. So we have a computer onboard, and it's ingesting all that information with the software we've developed so that the cars can drive autonomously, and we can put new neural networks and new algorithms on our cars to test them. We go from developing software on our computers to putting it on the computers on the cars themselves;

00:02:27 they collect sensor data, and then the software comes up with a decision on the speed at which the wheels of the car turn and the angle of the steering. And kind of the dream is that we would have these intelligent intersections and traffic lights that could actually be a safety net if there's some type of failure in your car, whether with the driver or even with your software.
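The onboard loop just described, sensor data in and a wheel-speed plus steering-angle command out, can be sketched roughly as follows. The field names, thresholds, and toy decision rule are all assumptions for illustration, not the MiniCity's actual control stack.

```python
from dataclasses import dataclass

@dataclass
class SensorFrame:
    """One tick of ingested sensor data (fields are illustrative)."""
    lidar_min_range_m: float   # closest lidar return
    imu_speed_mps: float       # current speed estimate from the IMU
    lane_offset_m: float       # camera-derived lateral offset from lane center

@dataclass
class ActuationCommand:
    """What the stack ultimately decides: wheel speed and steering angle."""
    wheel_speed_mps: float
    steering_angle_rad: float

def decide(frame, target_speed=1.0, k_steer=0.5):
    # Toy policy: stop when an obstacle is very close; otherwise hold the
    # target speed and steer proportionally back toward lane center.
    speed = 0.0 if frame.lidar_min_range_m < 0.3 else target_speed
    steering = -k_steer * frame.lane_offset_m
    return ActuationCommand(speed, steering)
```

In the real platform this decision would come from the deployed autonomy software rather than a two-line rule, but the interface, sensors in and low-level commands out, is the same shape.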

00:02:47 We're always going to have failures, and if we can use AI to predict these types of collisions before they happen, hopefully we can have much, much safer roads to drive on.
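The first three failure modes described in the transcript (actuation noise, a perception error, and a speeding driver; the fourth was a real human at the wheel) could be injected with simple wrappers like these. All function names, noise magnitudes, and offsets are illustrative assumptions, not the values used on the MiniCity hardware.

```python
import random

def inject_actuation_noise(steering, speed, sigma_steer=0.1, sigma_speed=0.2):
    """Failure mode 1: random noise injected into the low-level steering
    and speed commands, simulating a fault in the autonomy stack."""
    return (steering + random.gauss(0.0, sigma_steer),
            speed + random.gauss(0.0, sigma_speed))

def inject_perception_error(detections, offset=(0.5, 0.0)):
    """Failure mode 2: shift perceived obstacle positions to mimic an
    upstream perception error."""
    return [(x + offset[0], y + offset[1]) for (x, y) in detections]

def inject_speeding(speed, factor=1.5):
    """Failure mode 3: a speeding driver, scaling the commanded speed."""
    return speed * factor
```

Wrapping the nominal commands this way lets the same autonomy stack generate both nominal and failure trajectories for training the detector.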