A method of smoothing and edge detection in digitized images involves the use of a Gaussian smoothing filter that is adaptive in the sense that the filter scale varies with the estimated distance between each scene point and the camera. The estimate of distance is obtained via stereoscopy. The method was conceived for a developmental image-processing system that would recognize unexploded ordnance on a military test range. The method can also be used to enhance performance in other edge-detection applications.

Older methods of smoothing and edge detection involve, variously, the use of a single scale or a set of scales, without knowledge of which scale is appropriate for each location in an image. In other words, the scale is not related to the sizes of objects in the image, even though the apparent sizes of objects vary widely with distance. As a result, the use of a single scale or of an inappropriate set of scales for all image points can lead to spurious edges or to failure to detect edges of interest.

In the present method, the scale at each point in an image is adjusted to account for the variation of apparent size with distance and is thus related to the real-world size of the object depicted at that point. The scale (σ) at pixel coordinates *x*,*y* is given by σ(*x*,*y*) = *K*/*R*(*x*,*y*), where *K* is a predetermined constant and *R*(*x*,*y*) is the distance computed at *x*,*y* from the disparity between the two images of a stereoscopic pair. The algorithm that computes *R*(*x*,*y*) incorporates the calibration of the stereoscopic camera rig and includes a correction for radial lens distortion. The disparity between the left and right images for each pixel is obtained by minimizing the sum-of-squared-difference (SSD) measure of windows around the pixel in the Laplacian of the image. The three-dimensional coordinates of each scene point are then computed by triangulation. In the case of pixels for which *R*(*x*,*y*) cannot be computed (*e.g.*, where image texture is too low), *R*(*x*,*y*) values are propagated from neighboring pixels by use of a technique that approximates nearest-neighbor search.
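The two steps above can be sketched in Python. This is a minimal illustration, not the flight code: the function names, the window half-width `w`, the disparity search range `max_d`, and the value of the constant `K` are all assumptions chosen for clarity, and the calibration and lens-distortion corrections described in the article are omitted.

```python
import numpy as np
from scipy.ndimage import laplace

def ssd_disparity(left, right, x, y, w=5, max_d=32):
    """Estimate the disparity at pixel (x, y) by minimizing the
    sum-of-squared-difference (SSD) between windows in the
    Laplacian of the left and right images, as in the article."""
    L = laplace(left.astype(float))
    R = laplace(right.astype(float))
    ref = L[y - w:y + w + 1, x - w:x + w + 1]
    best_d, best_ssd = 0, np.inf
    for d in range(max_d + 1):          # search candidate disparities
        if x - d - w < 0:               # window would fall off the image
            break
        cand = R[y - w:y + w + 1, x - d - w:x - d + w + 1]
        ssd = np.sum((ref - cand) ** 2)
        if ssd < best_ssd:
            best_d, best_ssd = d, ssd
    return best_d

def scale_from_range(R_map, K=40.0):
    """sigma(x, y) = K / R(x, y); K here is an illustrative value."""
    return K / R_map
```

In a full system, the per-pixel disparities would be converted to ranges *R*(*x*,*y*) by triangulation using the calibrated rig geometry before `scale_from_range` is applied.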

The variable-scale Gaussian smoothing filter is applied in a window of 2*W*+1 by 2*W*+1 pixels centered at the pixel *x*,*y*. Ideally, the output of the filter would be given by

*F*(*x*,*y*) = (1/*C*) Σ_{*i*=−*W*..*W*} Σ_{*j*=−*W*..*W*} exp[−(*i*² + *j*²)/(2σ²)] *I*(*x*+*i*, *y*+*j*),

where *I*(*x*,*y*) is the brightness of the image at *x*,*y*. It turns out to be inefficient to perform this computation exactly, using σ = σ(*x*,*y*) for each pixel. For greater efficiency, the filter output is approximated by first convolving the entire image with a discrete set of Gaussian filters with scales related by factors of 2, then performing a parabolic interpolation to the appropriate scale for each pixel.
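The approximation can be sketched as follows, assuming a base set of scales related by factors of 2 and a three-point parabolic interpolation in log₂(σ); the function name and the particular base scales are illustrative choices, not taken from the article.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def variable_scale_smooth(image, sigma_map, base_sigmas=(1.0, 2.0, 4.0, 8.0)):
    """Approximate per-pixel Gaussian smoothing: convolve the whole
    image once per base scale, then interpolate parabolically (in
    log2 of sigma) to each pixel's own scale sigma_map[y, x]."""
    stack = np.stack([gaussian_filter(image.astype(float), s)
                      for s in base_sigmas])
    # Fractional index of each pixel's sigma in the log2-spaced stack.
    t = np.log2(np.clip(sigma_map, base_sigmas[0], base_sigmas[-1])
                / base_sigmas[0])
    i = np.clip(t.astype(int), 1, len(base_sigmas) - 2)  # parabola centre
    f = t - i                                            # offset in [-1, 1]
    rows, cols = np.indices(image.shape)
    y0 = stack[i - 1, rows, cols]
    y1 = stack[i, rows, cols]
    y2 = stack[i + 1, rows, cols]
    # Three-point parabola through (-1, y0), (0, y1), (1, y2).
    return y1 + 0.5 * f * (y2 - y0) + 0.5 * f * f * (y2 - 2.0 * y1 + y0)
```

When σ(*x*,*y*) coincides with one of the base scales, the interpolation reproduces that convolution exactly; in between, the parabola blends the three nearest precomputed results, which is far cheaper than filtering each pixel at its own σ.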

Edges are detected by an algorithm that computes gradients in the filtered image. For the purpose of edge detection, gradients must be comparable. However, gradients representing otherwise identical edges are stronger in regions smoothed at smaller values of σ. Therefore, to make gradients comparable, the magnitude of the gradient at each pixel *x*,*y* is normalized by multiplying it by σ(*x*,*y*).
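The normalization step amounts to the following sketch (the function name is an assumption; a complete detector would also threshold and thin the normalized response):

```python
import numpy as np

def normalized_edge_strength(smoothed, sigma_map):
    """Gradient magnitude of the smoothed image, multiplied by
    sigma(x, y) so that edge strengths are comparable across
    regions smoothed at different scales."""
    gy, gx = np.gradient(smoothed)       # row- and column-direction gradients
    return np.hypot(gx, gy) * sigma_map  # scale-normalized magnitude
```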

The figure shows an original 750-by-500-pixel image along with examples of edge detection, without and with stereo-guided scale selection. At σ=1, edges close to the camera are rough and a number of extraneous edges are detected. As the scale increases from σ=1 to σ=2 and σ=4, details of the most distant objects (the trees and the far end of the railing) are lost. With stereo-guided scale selection, edge-detection performance is high at both close and distant points in the scene.

*This work was done by Clark F. Olson of Caltech for NASA's Jet Propulsion Laboratory.* NPO-20475