The new technology in this approach combines the subpixel detection information from multiple frames of a sequence to achieve a more sensitive detection result, using only the information found in the images themselves. The method is constrained to be automated, robust, and computationally feasible for field networks with limited computation and data rates. This precludes simply downloading a video stream for pixel-wise co-registration on the ground. It is also important that the method not require precise knowledge of sensor position or orientation, because such information is often unavailable. Finally, it is assumed that the scene in question is approximately planar, which is appropriate for a high-altitude airborne or orbital view.
This approach tracks scene content to estimate camera motion and recover geometric relationships between the images. An initial stage identifies stable image features, or interest points, in consecutive frames and uses their geometric relationships to estimate a “homography,” a transformation that maps one frame onto another. Interest points generally correspond to regions of high information content or contrast. Previous work provides a wide range of interest point detectors; this innovation uses SIFT (Scale Invariant Feature Transform) keypoints recovered by a difference-of-Gaussians (DoG) operator applied at multiple scales. A nearest-neighbor matching procedure identifies candidate matches between frames. The end result of this first stage is a list of candidate interest points and descriptors in each frame.
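The homography estimation step can be illustrated with a minimal sketch. The article does not specify the estimation algorithm, so the direct linear transform (DLT) is assumed here; the function name and point sets are hypothetical, and a fielded system would add robust outlier rejection (e.g., RANSAC) over the candidate matches.

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate a 3x3 homography H mapping src points onto dst points
    (dst ~ H @ src in homogeneous coordinates) via the DLT method.

    src, dst: (N, 2) arrays of matched interest-point coordinates, N >= 4.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on H.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # H (flattened) is the null vector of A, recovered from the SVD.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize the free scale so H[2,2] == 1
```

In practice the input correspondences would come from the nearest-neighbor SIFT matching described above, with a robust estimator discarding mismatches before or during this least-squares step.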
An important benefit of SIFT detection is that the system permits absolute georeferencing based on image contents alone; the SIFT features provide sufficient information to geolocate a hot pixel. This suggests an initial characterization phase in which the remote observer transmits high-contrast SIFT descriptors along with images of the (fire-free) surface. The ground system, with possible human assistance, would then determine the SIFT features’ geographic locations.
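A lookup against such a georeferenced descriptor database might be sketched as follows. The matching rule (Lowe’s ratio test) is a standard choice for SIFT but is assumed rather than stated in the article, and the function name, database layout, and ratio threshold are illustrative.

```python
import numpy as np

def geolocate(query_desc, db_descs, db_coords, ratio=0.8):
    """Match one query descriptor against a georeferenced database.

    db_descs:  (N, D) array of stored feature descriptors.
    db_coords: list of N geographic coordinates, one per descriptor.
    Returns the coordinates of the best match, or None when the match
    fails the ratio test (best distance >= ratio * second-best distance).
    """
    d = np.linalg.norm(db_descs - query_desc, axis=1)
    order = np.argsort(d)
    best, second = order[0], order[1]
    if d[best] < ratio * d[second]:
        return tuple(db_coords[best])
    return None  # ambiguous match; better to abstain than mislocate
```

Rejecting ambiguous matches matters here because a wrong geolocation would assign a detection to the wrong track during regular operations.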
During regular operations, the system queries the database to find the geographic locations of new observations. Any preferred single- or multiple-channel detection rule is applied independently in each frame with a very lenient threshold. The algorithm then matches consecutive detections across potentially large displacements and associates them into tracks: unique physical events with precise geographic locations that may appear in multiple frames. Finally, the system considers the entire sequence history of each track to make the final detection decision.
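The association and sequence-level decision steps can be sketched as below. The article does not give the association rule or decision criterion, so a greedy nearest-track gate and a persistence-plus-mean-score test are assumed; the gate distance and thresholds are hypothetical placeholders.

```python
import numpy as np

def associate_tracks(frames, gate=2.0):
    """Greedily link per-frame detections into tracks by proximity.

    frames: list of per-frame detection lists, each detection an
    (x, y, score) tuple already mapped into a common (e.g., geographic)
    coordinate frame. A detection joins the nearest existing track within
    `gate` units; otherwise it starts a new track.
    """
    tracks = []  # each track: {'pos': (x, y), 'scores': [...]}
    for dets in frames:
        for x, y, s in dets:
            best, best_d = None, gate
            for t in tracks:
                d = np.hypot(t['pos'][0] - x, t['pos'][1] - y)
                if d < best_d:
                    best, best_d = t, d
            if best is None:
                tracks.append({'pos': (x, y), 'scores': [s]})
            else:
                best['scores'].append(s)
    return tracks

def confirm(track, n_min=3, mean_min=0.5):
    """Final decision over the track's whole history: the event must
    persist across several frames with sufficient average evidence."""
    return len(track['scores']) >= n_min and np.mean(track['scores']) >= mean_min
```

Because the per-frame threshold is lenient, single-frame clutter produces short, weak tracks that the sequence-level test rejects, while a persistent subpixel hot spot accumulates evidence across frames.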
This work was done by David R. Thompson of Caltech and Robert Kremens of Rochester Institute of Technology for NASA’s Jet Propulsion Laboratory. NPO-48129