This software addresses the problem of an autonomous vehicle patrolling a region for objects of interest using multiple cameras. The system must identify and track the objects over time and localize their positions in the world. It implements an autonomous perception and situation awareness system, which receives images from an omnidirectional camera head, identifies objects of interest in these images, and probabilistically tracks the objects’ presences over time, even as they may exist outside of sensor range.

The software consists of three functional parts. The Image Server records the images from all cameras on the camera head simultaneously. It also records data from the inertial navigation system (INS), which provides the vehicle pose at each image frame. Next, the Image Server stabilizes the images to produce images with horizontal, image-centered horizons. Finally, in addition to passing the images and pose to the next system for further processing, the Image Server efficiently logs this large amount of data for offline use.
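The horizon-stabilization step can be illustrated with a small sketch. The article does not describe the Image Server's internals, so the function below is a hypothetical construction, assuming a pinhole camera model: it builds a 2×3 affine warp that counter-rotates the image by the vehicle's roll and shifts it vertically by the pitch-induced horizon offset. All names and parameters are illustrative.

```python
import math

def horizon_warp(roll_rad, pitch_rad, focal_px, cx, cy):
    """Build a 2x3 affine warp that levels and centers the horizon.

    Hypothetical sketch: rotate about the image center (cx, cy) by
    -roll so the horizon becomes horizontal, then shift vertically by
    roughly focal_px * tan(pitch) so the horizon sits at image center.
    """
    c, s = math.cos(-roll_rad), math.sin(-roll_rad)
    dy = focal_px * math.tan(pitch_rad)  # pitch-induced horizon offset (px)
    # Rows map (x, y, 1) -> (x', y'): rotation about (cx, cy), then a
    # vertical translation by dy.
    return [
        [c, -s, cx - c * cx + s * cy],
        [s,  c, cy - s * cx - c * cy + dy],
    ]
```

A warp of this form could be applied per frame (e.g., with an image library's affine-warp routine) using the INS pose recorded alongside each frame.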

The Contact Server detects objects of interest in the stabilized images and calculates the absolute bearing of each contact. Two object-detection algorithms are employed, loosely trained on templates of the 3D objects of interest. Additionally, the Contact Server adjusts the images for better detection results; for example, compensating for variable lighting conditions.
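The bearing calculation can be sketched as follows. This is not the Contact Server's actual interface; it is a minimal illustration, assuming a linear mapping from pixel column to angle within a camera's horizontal field of view, with hypothetical parameter names.

```python
def absolute_bearing(heading_deg, cam_azimuth_deg, pixel_x, image_width, hfov_deg):
    """Convert a detection's pixel column into an absolute (world) bearing.

    Illustrative sketch: the bearing is the vehicle heading, plus the
    camera's mounting azimuth on the head, plus the detection's angular
    offset from the camera's optical axis.
    """
    # Angular offset of the pixel from the image center, assuming a
    # linear pixel-to-angle mapping across the field of view.
    offset_deg = (pixel_x - image_width / 2.0) / image_width * hfov_deg
    return (heading_deg + cam_azimuth_deg + offset_deg) % 360.0
```

Summing the per-camera mounting azimuths this way is also what lets detections be referenced to a common world frame across all cameras on the head.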

The Object-level Tracking and Change Detection (OTCD) Server integrates the information from the Contact Server over time and over all cameras to probabilistically track the objects of interest and to detect changes of interest. OTCD maintains a database of hypothesized true objects, along with a probability of existence that measures the confidence that the object exists at its hypothesized location at a given time. Finally, OTCD can send downstream alerts when a new object appears or a known object disappears.
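One common way to maintain such a probability of existence is a binary Bayes update, sketched below. The article does not specify OTCD's actual filter, so this is an assumed model with illustrative sensor parameters: a detection raises the probability, a miss lowers it, and an out-of-range location leaves it unchanged.

```python
def update_existence(p_exist, detected, p_detect=0.8, p_false_alarm=0.05):
    """Binary Bayes update of an object's probability of existence.

    Hypothetical sketch: p_detect is the assumed probability of
    detecting a真 object in range; p_false_alarm is the assumed
    probability of a spurious detection. detected=None means the
    hypothesized location is out of sensor range (no update).
    """
    if detected is None:
        return p_exist  # out of range: carry the belief forward
    if detected:
        num = p_detect * p_exist
        den = num + p_false_alarm * (1.0 - p_exist)
    else:
        num = (1.0 - p_detect) * p_exist
        den = num + (1.0 - p_false_alarm) * (1.0 - p_exist)
    return num / den
```

Thresholds on this probability could then drive the downstream alerts: a newly tracked hypothesis crossing a high threshold signals an appearance, and a known object's probability decaying below a low threshold signals a disappearance.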

The software detects objects of interest from arbitrary perspectives and under widely varied lighting conditions, and localizes and tracks these objects over time. It tracks the existence of these objects at their estimated positions, even when they are out of sensor range, enabling a patrol vehicle to leave, return, and hypothesize whether the same objects of interest remain in the positions where they were viewed earlier (change detection). The software integrates information from multiple source cameras to provide a 360° view of the environment, and tracks objects seamlessly as they transition between cameras. It includes several helpful analysis utilities, such as a real-time remote viewer of the scene and identified objects, efficient multi-leveled data logging, and the capability to process data logs offline for replays.

This work was done by Michael Wolf, Yoshiaki Kuwata, Christopher Assad, Robert D. Steele, Terrance L. Huntsberger, David Q. Zhu, Andrew Howard, and Lucas Scharenbroich of Caltech for NASA’s Jet Propulsion Laboratory.

This software is available for commercial licensing. Please contact Dan Broderick. NPO-47850


This article first appeared in the March 2014 issue of NASA Tech Briefs Magazine.
