The Viewing Imager/Gimballed Instrumentation Laboratory and Analog Neural Three-Dimensional Processing Experiment (VIGILANTE) is a "smart" optoelectronic sensor system that features ultrafast processing of image information for recognition and tracking of targets. VIGILANTE serves as a test bed for generic automatic-target-recognition (ATR) applications, with emphasis on demonstrating ATR capabilities for military defense against cruise missiles. Other applications for sensor systems derived from VIGILANTE could include medical imaging and machine vision for industrial robots and robotic vehicles.
VIGILANTE comprises two main subsystems (see figure). The VIGIL subsystem is an airborne telescope used to acquire image data for target-recognition experiments and to test novel passive and active focal-plane image sensors. The telescope will ultimately include a 15-cm Cassegrain unit, a gimballed mirror, and optical and electronic channels for multiband (infrared, visible, and ultraviolet) image sensors.
The ANTE subsystem is a prototype image-processing/target-recognition analog/digital computer system. The core computing engine in this system is a three-dimensional artificial neural network (3DANN) of a type described in "Neural-Network Modules for High-Speed Image Processing" (NPO-19881), NASA Tech Briefs, Vol. 21, No. 10 (October 1997), page 26. A 3DANN is a low-power-consumption digital/analog integrated-circuit module, about the size of a sugar cube, that can process data at a rate as high as 10¹² operations per second. The integrated-circuit stack of a previous 3DANN was mated to an array of infrared sensors. The 3DANN in ANTE is a modified version of the previous 3DANN, denoted "3DANN-M." The modifications enable VIGILANTE to accept data from an image sensor of arbitrary size and format. More importantly, the 3DANN-M can be used to perform general convolution operations on images, using kernels as large as 64 × 64 pixels.
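The 3DANN-M performs its convolutions in massively parallel analog hardware, but the underlying operation can be illustrated in software. The following sketch (an illustration only, not the 3DANN-M implementation) shows a direct two-dimensional convolution of an image with a small kernel; in the actual module the same arithmetic is carried out for kernels as large as 64 × 64:

```python
import numpy as np

def convolve2d_valid(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Direct 2-D 'valid' convolution: each output pixel is the sum of
    an image patch weighted by the (flipped) kernel."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    flipped = kernel[::-1, ::-1]  # true convolution flips the kernel
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * flipped)
    return out

# Example: a 5x5 image convolved with a 3x3 averaging kernel
image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.ones((3, 3)) / 9.0  # 3x3 mean filter
result = convolve2d_valid(image, kernel)
print(result.shape)  # (3, 3)
```

Each output pixel requires kh × kw multiply-accumulate operations, which is why a 64 × 64-kernel convolution over video-rate imagery demands hardware on the order of the 3DANN-M's 10¹² operations per second.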
VIGILANTE is designed to make the most of whatever imagery is presented, whether that imagery be monochromatic, multispectral, still, or moving. For this purpose, the VIGILANTE processing architecture is modeled after the image-processing architecture of the human eye and brain. The VIGILANTE image-recognition process is divided into four stages: collection of images from sensors, generation of synthetic images that augment raw images with additional information, fusion of all images, and semantic interpretation of fused images. The use of synthetic images is consistent with the hypothesis that the brain uses synthetic imagery to analyze scenes by comparing corresponding pixels among images of various types. This hypothesis is equivalent to a "rich pixel" concept, according to which the brain becomes a data-fusion machine at the pixel level, before it analyzes the entire scene in a semantic way.
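The four-stage process described above can be sketched as a simple software pipeline. This is a minimal illustration, not the VIGILANTE implementation; the stage functions (a gradient-magnitude synthetic image, per-pixel stacking for fusion, and a threshold standing in for semantic interpretation) are placeholder assumptions:

```python
import numpy as np

def collect(sensor_frames):
    # Stage 1: collect raw images from one or more sensors
    return [np.asarray(f, dtype=float) for f in sensor_frames]

def synthesize(images):
    # Stage 2: generate synthetic images that augment the raw ones
    # (here, a simple gradient-magnitude edge image per input)
    synthetic = []
    for img in images:
        gy, gx = np.gradient(img)
        synthetic.append(np.hypot(gx, gy))
    return images + synthetic

def fuse(images):
    # Stage 3: "rich pixel" fusion -- stack all images so each pixel
    # carries a feature vector drawn from every raw and synthetic image
    return np.stack(images, axis=-1)

def interpret(fused):
    # Stage 4: semantic interpretation (placeholder: flag pixels whose
    # total feature energy exceeds the scene mean)
    energy = fused.sum(axis=-1)
    return energy > energy.mean()

frame = np.random.rand(8, 8)
mask = interpret(fuse(synthesize(collect([frame]))))
print(mask.shape)  # (8, 8)
```

The point of the structure is that fusion happens per pixel, before any scene-level reasoning, mirroring the "rich pixel" hypothesis.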
By breaking complex image-recognition tasks into a series of regular operations, the VIGILANTE processing architecture maps image-recognition functions to a relatively small set of special-purpose electronic processing units that can implement a variety of algorithms. In particular, the special-purpose processing unit for generation of synthetic images (by such processes as spatial filtering, detection of motion, and identification of corresponding pixels in related images) is the 3DANN-M convolution device. Pixel-level fusion can be performed on such parallel-processing devices as single-instruction/multiple-data (SIMD) arrays. Relative to other functions, semantic analysis seldom presents a significant computational bottleneck and can ordinarily be performed by general-purpose computing hardware.
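Pixel-level fusion maps naturally onto SIMD hardware because the same instruction is applied independently at every pixel. The sketch below uses NumPy's whole-array operations as a stand-in for a SIMD array, fusing three co-registered spectral bands (random placeholder data, not VIGILANTE imagery) with two simple per-pixel rules:

```python
import numpy as np

# Three co-registered bands (e.g. infrared, visible, ultraviolet);
# random arrays stand in for real sensor frames
ir  = np.random.rand(64, 64)
vis = np.random.rand(64, 64)
uv  = np.random.rand(64, 64)

# Each whole-array expression is one elementwise operation per pixel,
# exactly the pattern a SIMD array executes in lockstep across lanes
fused_max  = np.maximum.reduce([ir, vis, uv])  # strongest band response per pixel
fused_mean = (ir + vis + uv) / 3.0             # mean band response per pixel

print(fused_max.shape, fused_mean.shape)  # (64, 64) (64, 64)
```

Because every pixel is processed by the same short instruction sequence with no data-dependent branching, throughput scales directly with the number of SIMD lanes.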
This work was done by Suraphol Udomkesmalee, Curtis Padgett, Wai-Chi Fang, and Steven Suddarth of Caltech for NASA's Jet Propulsion Laboratory. For further information, access the Technical Support Package (TSP) free on-line at www.techbriefs.com under the Electronic Systems category. NPO-20357