Imagery acquired from a stereo camera pair is processed through the “JPLV” stereo processing pipeline. From the resulting stereo data, large 3D blobs are found; these blobs are then described and classified by shape to determine which are vehicles and which are not. Prior vehicle detection algorithms are either targeted to specific domains, such as following lead cars, or are intensity-based methods that learn typical vehicle appearances from a large corpus of training data.
In order to detect vehicles, the JPL Vehicle Detection (JVD) algorithm goes through the following steps:
1. Take as input a left disparity image and left rectified image from JPLV stereo.
2. Project the disparity data onto a two-dimensional Cartesian map.
3. Post-process the map built in the previous step to clean it up.
4. Take the processed map and find peaks. For each peak, grow it out into a map blob. These map blobs represent large, roughly vehicle-sized objects in the scene.
5. Reject map blobs that do not meet certain criteria, and build descriptors for those that remain. Pass these descriptors on to a classifier, which determines whether or not each blob is a vehicle.
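The steps above can be sketched as follows. This is a minimal illustration, not the JVD implementation: the camera geometry, grid resolution, and footprint thresholds are assumed values, and a simple size check stands in for the descriptor/classifier stage.

```python
import numpy as np

def disparity_to_map(disparity, baseline=0.12, focal_px=600.0,
                     cell_size=0.25, map_dim=80):
    """Step 2 sketch: project valid disparity pixels into an overhead 2D
    Cartesian grid. Camera parameters are illustrative assumptions."""
    h, w = disparity.shape
    grid = np.zeros((map_dim, map_dim))
    cx = w / 2.0
    for v in range(h):
        for u in range(w):
            d = disparity[v, u]
            if d <= 0:  # skip invalid disparity values
                continue
            z = baseline * focal_px / d       # depth in meters
            x = (u - cx) * z / focal_px       # lateral offset in meters
            row = int(z / cell_size)
            col = int(x / cell_size + map_dim / 2)
            if 0 <= row < map_dim and 0 <= col < map_dim:
                grid[row, col] += 1           # accumulate evidence per cell
    return grid

def grow_blob(grid, seed, thresh=1.0):
    """Step 4 sketch: flood-fill outward from a peak cell, collecting the
    connected region of cells above thresh (a 'map blob')."""
    stack, blob, seen = [seed], [], {seed}
    while stack:
        r, c = stack.pop()
        if grid[r, c] < thresh:
            continue
        blob.append((r, c))
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < grid.shape[0] and 0 <= nc < grid.shape[1] \
                    and (nr, nc) not in seen:
                seen.add((nr, nc))
                stack.append((nr, nc))
    return blob

def vehicle_sized(blob, cell_size=0.25, min_area=2.0, max_area=15.0):
    """Step 5, greatly simplified: keep only blobs with a roughly
    vehicle-sized footprint (area thresholds are assumptions)."""
    area = len(blob) * cell_size ** 2
    return min_area <= area <= max_area
```

A full implementation would replace `vehicle_sized` with shape descriptors fed to a trained classifier, as described in step 5.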
The probability of detection is the probability that a vehicle present in the image, visible, and unoccluded will be detected by the JVD algorithm. To estimate this probability, eight sequences from the RCTA (Robotics Collaborative Technology Alliances) program were ground-truthed, totaling over 4,000 frames with 15 unique vehicles. Because these vehicles were observed at varying ranges, the probability of detection can be estimated as a function of range. At the time of this reporting, the JVD algorithm was tuned to perform best on cars seen from the front, rear, or either side, and performed poorly on vehicles seen from oblique angles.
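Estimating detection probability as a function of range amounts to binning the ground-truth records by range and taking the detection rate within each bin. A minimal sketch, using made-up records and bin edges rather than the RCTA data:

```python
import numpy as np

def detection_rate_by_range(records, bin_edges):
    """records: iterable of (range_m, detected) ground-truth pairs, where
    detected is 1 if the JVD-style detector fired and 0 otherwise.
    Returns the empirical detection probability for each range bin."""
    ranges = np.array([r for r, _ in records], dtype=float)
    hits = np.array([d for _, d in records], dtype=float)
    rates = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (ranges >= lo) & (ranges < hi)
        # NaN marks bins with no observations rather than claiming 0%
        rates.append(hits[mask].mean() if mask.any() else float('nan'))
    return rates
```

For example, records `[(10, 1), (10, 1), (30, 0), (30, 1)]` with bin edges `[0, 20, 40]` yield detection rates of 1.0 for 0–20 m and 0.5 for 20–40 m.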
This work was done by Shane Brennan, Max Bajracharya, Larry H. Matthies, and Andrew B. Howard of Caltech for NASA’s Jet Propulsion Laboratory. NPO-47569