A trainable software system known as JARtool 2.0 has been developed to help scientists find localized objects of interest ("target objects") in image data bases. A human expert implicitly trains the system by using a graphical user interface (see figure) to circle all examples of the target object within a set of images. From the user-provided examples, the system learns an appearance model that can be used to detect the target object in previously unseen images.
JARtool 2.0 is built on top of an image display and graphical user interface program called "SAOtng 1.7," which was developed by the Smithsonian Astrophysical Observatory. JARtool utilizes the basic image labeling and browsing capabilities of SAOtng, but also incorporates components that perform matched filtering, principal components analysis, and supervised classification. These components provide the trainable pattern recognition capability.
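To make the matched-filtering step concrete, the following is a minimal illustrative sketch, not JARtool's actual code: a template averaged from user-circled examples is correlated against the image via normalized cross-correlation, and locations whose response exceeds a threshold become candidate detections. All names and the toy data are invented for illustration.

```python
import numpy as np

def matched_filter_response(image, template):
    """Normalized cross-correlation of a zero-mean template with the image."""
    t = template - template.mean()
    t = t / (np.linalg.norm(t) + 1e-12)
    th, tw = t.shape
    H, W = image.shape
    out = np.zeros((H - th + 1, W - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + th, j:j + tw]
            p = patch - patch.mean()
            denom = np.linalg.norm(p) + 1e-12
            out[i, j] = float(np.dot(p.ravel(), t.ravel())) / denom
    return out

def candidate_locations(image, template, threshold=0.5):
    """Return (row, col) positions whose filter response exceeds threshold."""
    resp = matched_filter_response(image, template)
    return [(int(i), int(j)) for i, j in zip(*np.where(resp >= threshold))]

# Toy demo: plant one bright bump in a flat image and recover its position.
img = np.zeros((10, 10))
img[4:7, 4:7] = 1.0
img[5, 5] = 2.0                     # brighter center, like a crude cone
tmpl = np.ones((3, 3))
tmpl[1, 1] = 2.0                    # template with the same bump shape
hits = candidate_locations(img, tmpl)
```

In a real pipeline this stage is deliberately permissive: it is cheap to run over a whole image and its false alarms are pruned by the later feature-extraction and classification stages.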
JARtool was originally developed to locate small volcanoes in synthetic aperture radar (SAR) images of Venus returned by the Magellan spacecraft. However, the system can be applied to other domains: the user need only supply a new set of training examples for the new class of target objects, with little or no explicit reprogramming required.
This work was done by Michael Burl, Usama Fayyad, Padhraic Smyth, Pietro Perona, Saleem Mukhtar, Maureen Burl, Lars Asker, Jayne Aubele, Larry Crumpler, and Joseph Roden for NASA's Jet Propulsion Laboratory. For further information, access the Technical Support Package (TSP) free on-line at www.techbriefs.com under the Mathematics and Information Sciences category, or circle no. 156 on the TSP Order Card in this issue to receive a copy by mail ($5 charge). This software is available for commercial licensing. Please contact Don Hart of the California Institute of Technology at (818) 393-3425. Refer to NPO-20213.
This Brief includes a Technical Support Package (TSP).

The TSP, "System for locating objects of interest in image data bases" (reference NPO-20213), is currently available for download from the TSP library.
Overview
This document presents an overview of JARtool 2.0, a trainable software system developed by NASA's Jet Propulsion Laboratory (JPL) to assist scientists in locating localized objects of interest within image databases. A human expert trains the system by circling examples of target objects through a graphical user interface; from these examples, the software learns an appearance model that can subsequently be applied to detect the objects in previously unseen images.
JARtool 2.0 employs a three-stage approach to pattern recognition, which includes focus of attention (FOA), feature learning, and classifier learning. This methodology synthesizes an appearance model for the target objects based on the examples provided by the user. The system's innovative integration of well-known techniques, such as matched filtering, principal components analysis, and supervised classification, allows it to effectively identify small-scale objects in image data.
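The second and third stages named above can be sketched as follows. This is a hedged illustration under simplifying assumptions, not JARtool's implementation: principal components are learned from labeled example windows via SVD, candidate windows are projected onto that basis, and a simple nearest-class-mean classifier (a stand-in for whatever supervised classifier the real system uses) labels each projection. The toy data and all function names are invented.

```python
import numpy as np

def learn_pca_basis(examples, k=2):
    """examples: (n, d) matrix of flattened training windows.
    Returns the mean window and the top-k principal directions."""
    mean = examples.mean(axis=0)
    centered = examples - mean
    # SVD of the centered data yields principal directions directly,
    # without forming a covariance matrix.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]                      # shapes (d,), (k, d)

def project(windows, mean, basis):
    """Project flattened windows onto the learned basis -> (n, k) features."""
    return (windows - mean) @ basis.T

def nearest_mean_classifier(pos_feats, neg_feats):
    """Classifier learning stage: label by distance to each class mean."""
    mu_pos = pos_feats.mean(axis=0)
    mu_neg = neg_feats.mean(axis=0)
    def classify(feats):
        d_pos = np.linalg.norm(feats - mu_pos, axis=1)
        d_neg = np.linalg.norm(feats - mu_neg, axis=1)
        return d_pos < d_neg                 # True -> "target object"
    return classify

# Toy data: "target" windows are bright, background windows are dark.
rng = np.random.default_rng(0)
pos = 1.0 + 0.1 * rng.standard_normal((20, 9))   # 20 flattened 3x3 targets
neg = 0.0 + 0.1 * rng.standard_normal((20, 9))   # 20 background windows
mean, basis = learn_pca_basis(np.vstack([pos, neg]), k=2)
clf = nearest_mean_classifier(project(pos, mean, basis),
                              project(neg, mean, basis))
unseen = np.vstack([1.0 + 0.05 * rng.standard_normal((1, 9)),
                    0.05 * rng.standard_normal((1, 9))])
preds = clf(project(unseen, mean, basis))
```

The design point illustrated is the division of labor: the learned basis compresses each candidate window into a few features, so the classifier operates in a low-dimensional space rather than on raw pixels.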
The primary application of JARtool 2.0 has been the analysis of Magellan SAR imagery of Venus, specifically locating small volcanoes. The underlying methods, however, apply to a broader range of object-detection problems in scientific and research imagery.
The document also credits the collaborative effort behind JARtool 2.0, listing several key inventors and contributors, including Lars Asker, Jayne C. Aubele, and others. It emphasizes the system's capability to adapt to different domains without extensive reprogramming, as it relies on user-provided training examples to learn its detection capability.
In summary, JARtool 2.0 offers scientists a trainable pattern-recognition tool for analyzing image data and discovering localized objects of interest. Its ease of use and its adaptability to new target classes make it valuable across research fields, particularly planetary science and remote sensing.

