An algorithm that can discover potentially interesting objects in image data has been formulated and implemented in software. The algorithm is intended for applications in which the target objects are mathematically ill-defined and/or not known or specified in advance. Potential applications include finding localized geological features, locating defects in fabrics, and identifying fossils in rock samples.

The algorithm, which is based loosely on the human visual system, looks for regions of the image that differ significantly from the local background context. Regions of the image are projected into a subspace by use of multi-orientation, multi-scale Gabor filters. Within this filter-response subspace, deviant areas are identified by use of an adaptive statistical test in which the filter-space description of the region is compared with a mathematical model derived from the local background. Deviant areas are then spatially agglomerated and grouped across scale.

In preliminary computational experiments on planetary images collected with various instruments (optical cameras, imaging radar, and ground-based telescopes), the software, without specifically being told what to look for, was able to autonomously rediscover a number of well-known geological features, including impact craters, volcanoes, dunes, and ice geysers.
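The brief does not include source code, but the pipeline it describes can be sketched in a simplified form: project the image through a small bank of Gabor filters, then flag pixels whose filter-response vectors deviate statistically from a background model. The sketch below is a minimal, hedged illustration of that idea, with several simplifying assumptions not taken from the brief: the background model is estimated globally rather than from a local context window, the deviance measure is a plain Mahalanobis distance rather than the adaptive test described, and the spatial agglomeration and cross-scale grouping steps are omitted. All function names and parameter choices (kernel size, sigma, orientations, wavelengths) are illustrative.

```python
import numpy as np

def gabor_kernel(size, sigma, theta, wavelength):
    """Real-valued Gabor kernel: a cosine grating under a Gaussian window."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate the carrier to orientation theta.
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return envelope * np.cos(2.0 * np.pi * xr / wavelength)

def conv_same(image, kernel):
    """'Same'-size 2-D correlation using reflect padding (NumPy only)."""
    kh, kw = kernel.shape
    padded = np.pad(image, ((kh // 2, kh // 2), (kw // 2, kw // 2)),
                    mode="reflect")
    windows = np.lib.stride_tricks.sliding_window_view(padded, kernel.shape)
    return np.einsum("ijkl,kl->ij", windows, kernel)

def anomaly_map(image,
                thetas=(0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4),
                wavelengths=(4.0, 8.0),
                size=9, sigma=3.0):
    """Per-pixel deviance score from a Gabor filter-response subspace.

    Each pixel gets one response per (orientation, wavelength) pair; the
    score is the Mahalanobis distance of that feature vector from the
    mean/covariance of all pixels (a global stand-in for the brief's
    local-background model).
    """
    feats = np.stack(
        [conv_same(image, gabor_kernel(size, sigma, t, w))
         for t in thetas for w in wavelengths],
        axis=-1)
    flat = feats.reshape(-1, feats.shape[-1])
    mu = flat.mean(axis=0)
    # Small ridge keeps the covariance invertible for degenerate textures.
    cov = np.cov(flat, rowvar=False) + 1e-6 * np.eye(flat.shape[1])
    diff = flat - mu
    maha = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(cov), diff)
    return maha.reshape(image.shape)
```

As a quick sanity check, running `anomaly_map` on a synthetic image of vertical stripes with a patch of horizontal stripes inserted yields noticeably higher scores inside the patch than in the uniform background, which is the qualitative behavior the brief describes for features that differ from their surroundings.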
This program was written by Michael Burl, Charles Fowlkes, and Dominic Lucchetti of Caltech for NASA's Jet Propulsion Laboratory. For further information, access the Technical Support Package (TSP) free on-line at www.nasatech.com/tsp under the Software category.
This software is available for commercial licensing. Please contact Don Hart of the California Institute of Technology at (818) 393-3425. Refer to NPO-21107.
This Brief includes a Technical Support Package (TSP).