An improved algorithm for detecting gray-scale and binary templates in digitized images has been devised. The greatest difference between this algorithm and prior template-detecting algorithms lies in the measure used to determine the quality, or degree, of match between a template and a given portion of an image. This measure is based on a maximum-likelihood formulation of the template-matching problem; both the measure and the matching performance obtained by use of it are more robust than those of prior template-matching algorithms, most of which utilize a sum-of-squared-differences measure. Along with template matching, the algorithm performs subpixel localization, estimation of uncertainty, and optimal selection of features. The algorithm is expected to be useful for detecting templates in digital images in a variety of applications, including recognition of objects, ranging by use of stereoscopic images, and tracking of moving objects or features. (For the purpose of tracking, features or objects recognized in an initial image could be used as templates for matching in subsequent images of the same scene.)

In an image of rocky terrain, one hundred 7-by-7-pixel feature templates were selected as having the lowest uncertainty for tracking. Tracking was then performed in an image acquired after the camera had undergone forward motion. Seventy-two features survived to be tracked after pruning by use of uncertainty and probability-of-failure measures. No false positives remained among the tracked features.
For the sake of computational simplicity, the present version of the algorithm involves two-dimensional edge and intensity templates, the pose space of which is restricted to translations in the image plane; however, it is possible, in principle, to extend the algorithm to more complex cases. The basic image-matching technique used in the algorithm utilizes a prior maximum-likelihood formulation of edge template matching that has been extended to include matching of gray-scale templates. In this formulation, one generates a function that assigns a likelihood to each of the possible positions of a template. In an application in which a single instance of the template appears in the image (e.g., tracking or stereoscopy), one accepts the template position with the highest likelihood if the matching uncertainty is below a specified threshold. In other recognition applications, one accepts all template positions with likelihoods greater than some threshold value.
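To make the formulation concrete, the sketch below computes a log-likelihood for every candidate translation of an intensity template. The specific pixel model here (a Gaussian inlier term mixed with a small uniform outlier floor) is an assumption chosen to illustrate why a likelihood measure tolerates outliers better than a sum of squared differences; the brief does not specify the exact density used. The function name and parameters are hypothetical.

```python
import numpy as np

def likelihood_map(image, template, sigma=8.0, outlier=1e-3):
    """Log-likelihood of the template at each valid integer translation.

    Each pixel residual is scored with a mixture of a Gaussian (inliers)
    and a small uniform floor (outliers); the floor caps the penalty any
    single bad pixel can contribute, unlike a sum of squared differences.
    """
    th, tw = template.shape
    H, W = image.shape
    out = np.full((H - th + 1, W - tw + 1), -np.inf)
    norm = 1.0 / (np.sqrt(2.0 * np.pi) * sigma)
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            d = image[r:r + th, c:c + tw].astype(float) - template
            p = norm * np.exp(-0.5 * (d / sigma) ** 2) + outlier
            out[r, c] = np.log(p).sum()
    return out
```

For a single-instance application (tracking or stereoscopy), one would take the position of the maximum of this map, subject to the uncertainty test described above; for recognition, one would keep every position whose value exceeds a threshold.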

The search for the template position(s) is performed according to a variant of a multiresolution technique that makes it unnecessary to consider all possible template positions explicitly, yet makes it possible to find the best template position(s) in a discretized search space. In this technique, the space of model positions is divided into rectilinear cells and the cells are tested to determine which (if any) contain positions that satisfy a likelihood-based acceptance criterion. The cells that pass the test are divided into subcells, which are examined recursively, and the rest are pruned.
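The divide-and-prune recursion can be sketched as follows. The brief does not give the form of the per-cell test, so this sketch substitutes a simple optimistic bound, assuming the score cannot change faster than a known `lipschitz` rate per pixel of translation; any valid upper bound on the score within a cell would serve the same role. All names here are illustrative.

```python
def cell_search(score, rows, cols, threshold, lipschitz):
    """Find all integer translations whose score meets `threshold`.

    `score(r, c)` returns the log-likelihood at an integer translation.
    A cell is pruned when even an optimistic upper bound on the score
    anywhere inside it falls below the threshold; surviving cells are
    split into subcells and examined recursively.
    """
    hits = []
    stack = [(0, rows - 1, 0, cols - 1)]
    while stack:
        r0, r1, c0, c1 = stack.pop()
        mid_r, mid_c = (r0 + r1) // 2, (c0 + c1) // 2
        # Chebyshev radius of the cell, measured from its centre point.
        radius = max(r1 - mid_r, mid_r - r0, c1 - mid_c, mid_c - c0)
        if score(mid_r, mid_c) + lipschitz * radius < threshold:
            continue  # no position in this cell can pass: prune it
        if r0 == r1 and c0 == c1:
            if score(r0, c0) >= threshold:
                hits.append((r0, c0))
            continue
        row_halves = ((r0, mid_r), (mid_r + 1, r1)) if r0 < r1 else ((r0, r1),)
        col_halves = ((c0, mid_c), (mid_c + 1, c1)) if c0 < c1 else ((c0, c1),)
        for ra, rb in row_halves:
            for ca, cb in col_halves:
                stack.append((ra, rb, ca, cb))
    return hits
```

Because the bound is conservative, no acceptable position is ever pruned, while large regions of the pose space far from any match are rejected after a single evaluation.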

Inasmuch as the likelihood function measures the probability that each position is an instance of the template, error and uncertainty cause the likelihood-function peak that corresponds to that position to be spread over some volume of the pose space. Integration of the likelihood function under the peak yields an improved measure of the quality of the peak as an estimate of the location of the template. Subpixel localization and estimation of uncertainty are performed by fitting the likelihood surface with a parameterized function at the locations of the peaks. In a stereoscopic or tracking application, the probability of failure to detect the correct position of the template is estimated in a procedure that includes a comparison of the integral of the likelihood under the most likely peak to the integral of the likelihood in the remainder of the pose space.
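The two estimates described above can be sketched on a discrete log-likelihood map. The brief fits a parameterized surface at each peak; the sketch below approximates that with independent one-dimensional parabola fits per axis, a common subpixel refinement, and approximates the integrals with sums over a small window around the peak. Both the window size and the fitting choice are assumptions.

```python
import numpy as np

def subpixel_peak(L, r, c):
    """Refine an integer peak of a log-likelihood map to subpixel accuracy.

    Fits a parabola through the three samples along each axis; the vertex
    offset of that parabola is the subpixel correction.
    """
    dr = 0.5 * (L[r-1, c] - L[r+1, c]) / (L[r-1, c] - 2*L[r, c] + L[r+1, c])
    dc = 0.5 * (L[r, c-1] - L[r, c+1]) / (L[r, c-1] - 2*L[r, c] + L[r, c+1])
    return r + dr, c + dc

def failure_probability(L, r, c, window=1):
    """Likelihood mass outside the top peak, relative to the whole space.

    Compares the summed likelihood in a small window around the most
    likely peak with the summed likelihood everywhere else.
    """
    P = np.exp(L - L.max())  # likelihoods, rescaled for numerical stability
    peak = P[max(r - window, 0):r + window + 1,
             max(c - window, 0):c + window + 1].sum()
    return 1.0 - peak / P.sum()
```

A sharply concentrated likelihood surface yields a failure probability near zero; a surface with substantial mass at competing positions yields a value near one, flagging the match as unreliable, which is how the tracked features in the rocky-terrain example were pruned.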

This work was done by Clark F. Olson of Caltech for NASA’s Jet Propulsion Laboratory. For further information, access the Technical Support Package (TSP) free on-line at www.nasatech.com/tsp under the Information Sciences category. NPO-21026



The Technical Support Package (TSP) for this brief, "Maximum-Likelihood Template Matching" (reference NPO-21026), is available for download from the TSP library.

This article first appeared in the February, 2002 issue of NASA Tech Briefs Magazine (Vol. 26 No. 2).



Overview

The document presents a novel algorithm for maximum-likelihood template matching developed by Clark F. Olson at NASA's Jet Propulsion Laboratory. This algorithm significantly improves the detection of gray-scale and binary templates in digital images, addressing limitations found in prior template-matching methods that primarily relied on sum-of-squared-differences measures.

Key features of the algorithm include robust matching performance, subpixel localization, estimation of uncertainty, and optimal selection of features for tracking. The algorithm is particularly useful in applications such as object recognition, stereoscopic ranging, and tracking moving objects. It operates by generating a likelihood function that assigns probabilities to potential template positions within an image, allowing for the identification of the most likely template location based on a specified threshold of matching uncertainty.

The algorithm employs a multi-resolution search technique that efficiently narrows down potential template positions without the need to evaluate every possibility explicitly. This involves dividing the search space into rectilinear cells, testing them for likelihood, and recursively examining promising cells while pruning those that do not meet the acceptance criteria. This approach enhances computational efficiency while maintaining accuracy in template detection.

Additionally, the algorithm incorporates a method for estimating the probability of failure in detecting the correct template position, which is crucial for applications requiring high reliability, such as autonomous vehicle navigation. By integrating the likelihood function under the peak of the most likely position and comparing it to the likelihood in the rest of the pose space, the algorithm provides a measure of confidence in its detections.

The document emphasizes the algorithm's robustness and efficiency, making it well suited to applications in complex environments. The work was motivated by the need to improve feature tracking for visual navigation in autonomous systems, showcasing its potential impact on future technologies.

Overall, this advancement in template matching represents a significant step forward in image processing, with implications for various fields, including robotics, computer vision, and remote sensing. The algorithm's ability to handle uncertainty and optimize feature selection positions it as a valuable tool for researchers and engineers working with digital imagery.