The new technology in this approach combines subpixel detection information from multiple frames of a sequence to achieve a more sensitive detection result, using only information found in the images themselves. The method must be automated, robust, and computationally feasible for fielded networks with constrained computation and data rates; this precludes simply downloading a video stream for pixel-wise co-registration on the ground. The method must also not require precise knowledge of sensor position or pointing direction, because such information is often unavailable. Finally, it is assumed that the scene in question is approximately planar, which is appropriate for a high-altitude airborne or orbital view.
This approach tracks scene content to estimate camera motion and recover geometric relationships between the images. An initial stage identifies stable image features, or interest points, in consecutive frames and uses their geometric relationships to estimate a "homography," a transformation mapping pixel coordinates in one frame to the next. Interest points generally correspond to regions of high information content or contrast, and previous work provides a wide range of interest point detectors. In this innovation, SIFT (Scale Invariant Feature Transform) keypoints recovered by a difference-of-Gaussians (DoG) operator applied at multiple scales are used. A nearest-neighbor matching procedure identifies candidate matches between frames. The end result of this first step is a list of interest points and descriptors in each frame, together with candidate matches between consecutive frames.
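As an illustration of this stage, the sketch below uses OpenCV's SIFT detector, brute-force nearest-neighbor matching with a ratio test, and RANSAC homography fitting. It is a minimal open-source analogue under the planar-scene assumption, not the flight implementation; the function name and thresholds are illustrative.

```python
# Sketch of the interest-point / homography stage using OpenCV.
# Illustrative analogue of the described approach; names and thresholds
# are assumptions, not the flight code.
import cv2
import numpy as np

def estimate_frame_homography(frame_a, frame_b, ratio=0.75):
    """Estimate the homography mapping frame_a pixel coordinates into frame_b."""
    sift = cv2.SIFT_create()                       # DoG keypoints at multiple scales
    kp_a, desc_a = sift.detectAndCompute(frame_a, None)
    kp_b, desc_b = sift.detectAndCompute(frame_b, None)

    # Nearest-neighbor matching with a ratio test to reject ambiguous matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    candidates = matcher.knnMatch(desc_a, desc_b, k=2)
    good = [m for m, n in candidates if m.distance < ratio * n.distance]

    # Robustly fit a homography; valid because the scene is approximately planar.
    src = np.float32([kp_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H, good, inlier_mask
```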
An important benefit of SIFT detection is that it permits absolute georeferencing based on image content alone; the SIFT features by themselves provide sufficient information to geolocate a hot pixel. This suggests an initial characterization phase in which the remote observer transmits high-contrast SIFT descriptors along with images of the (fire-free) surface. The ground system, with possible human assistance, would then determine the SIFT features' geographic locations.
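A minimal sketch of such a landmark database is shown below, assuming the ground system stores each characterized SIFT descriptor with the (longitude, latitude) it was assigned. The class name, database layout, and the pixel-to-geographic homography fit are assumptions for illustration, not the system's actual design.

```python
# Sketch of a landmark database built during characterization and queried
# during regular operations. Layout and names are illustrative assumptions.
import cv2
import numpy as np

class LandmarkDatabase:
    def __init__(self, descriptors, geo_coords):
        # descriptors: N x 128 SIFT descriptors of fire-free surface landmarks
        # geo_coords:  N x 2 (longitude, latitude) assigned on the ground
        self.descriptors = np.asarray(descriptors, dtype=np.float32)
        self.geo_coords = np.asarray(geo_coords, dtype=np.float32)
        self.matcher = cv2.BFMatcher(cv2.NORM_L2)

    def geolocate(self, keypoints, descriptors, hot_pixel_xy, ratio=0.75):
        """Map a hot pixel's (x, y) image coordinates to geographic coordinates."""
        pairs = self.matcher.knnMatch(np.asarray(descriptors, np.float32),
                                      self.descriptors, k=2)
        good = [m for m, n in pairs if m.distance < ratio * n.distance]

        # Fit a homography from image coordinates to (lon, lat) using the matched
        # landmarks; valid under the planar-scene assumption.
        img_pts = np.float32([keypoints[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        geo_pts = self.geo_coords[[m.trainIdx for m in good]].reshape(-1, 1, 2)
        H, _ = cv2.findHomography(img_pts, geo_pts, cv2.RANSAC, 1e-3)

        xy = np.float32([[hot_pixel_xy]])             # shape (1, 1, 2)
        return cv2.perspectiveTransform(xy, H)[0, 0]  # (lon, lat)
```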
During regular operations, the system can query the database to find geographic locations of new observations. Any preferred single- or multiple-channel detection rule is applied independently in each frame with a very lenient threshold. The algorithm then matches detections across consecutive frames, allowing for potentially large displacements, and associates them into tracks: unique physical events with a precise geographic location that may appear in multiple frames. Finally, the system considers the entire history of each track to make the final detection decision.
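The source does not specify the association gate or the final decision rule, so the sketch below stands in with a nearest-track gating step and a summed-score test over each track's history; names and thresholds are illustrative.

```python
# Sketch of track association and the multiple-frame decision step. The
# geographic gating rule and the summed-score test are assumptions standing
# in for the unspecified flight decision rule.
from dataclasses import dataclass, field

@dataclass
class Track:
    geo: tuple                                   # (lon, lat) of the physical event
    scores: list = field(default_factory=list)   # per-frame detection scores

def associate(tracks, detections, gate_deg=0.01):
    """Assign each geolocated detection to the nearest track within a gate,
    or start a new track. gate_deg (degrees) is an illustrative threshold."""
    for geo, score in detections:                # detection = ((lon, lat), score)
        best, best_d = None, gate_deg
        for t in tracks:
            d = max(abs(geo[0] - t.geo[0]), abs(geo[1] - t.geo[1]))
            if d < best_d:
                best, best_d = t, d
        if best is None:
            best = Track(geo=geo)
            tracks.append(best)
        best.scores.append(score)
    return tracks

def confirm(track, threshold=5.0):
    """Final decision over the whole track history: accumulate lenient
    per-frame evidence and compare it to a stricter overall bar."""
    return sum(track.scores) >= threshold
```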
This work was done by David R. Thompson of Caltech and Robert Kremens of Rochester Institute of Technology for NASA’s Jet Propulsion Laboratory. NPO-48129
This Brief includes a Technical Support Package (TSP). "Multiple-Frame Detection of Subpixel Targets in Thermal Image Sequences" (reference NPO-48129) is currently available for download from the TSP library.
Overview
The document is a Technical Support Package from NASA's Jet Propulsion Laboratory (JPL) detailing a project focused on the "Multiple-Frame Detection of Subpixel Targets in Thermal Image Sequences." This research aims to enhance the detection of small wildfires using thermal infrared imaging technology deployed on satellite constellations, specifically the Iridium Next constellation.
The project investigates the feasibility of a low-data-rate sensor constellation that uses onboard image processing to improve wildfire detection rates. The analysis includes simulations of 18 to 30 sensors in six orbital planes, each sensor pairing a 4-micron camera with a Field Programmable Gate Array (FPGA). The design incorporates a Raytheon large-format infrared focal plane array, which allows small fires (approximately 15 meters across) to be detected despite a ground pixel size of 250 meters over a 500-kilometer swath.
Key findings indicate that the number of sensors directly affects the time between detections, which is critical for timely wildfire response. The project demonstrates that using multiple frames of data significantly enhances detection sensitivity without altering the underlying sensor technology. This advancement in algorithm development is expected to improve the efficiency of orbital wildfire networks, leading to better coverage and reduced costs.
The document also highlights the potential for automatic landmark detection, which can minimize downlink data volumes by transmitting only essential information about high-contrast features and hot pixels. This capability could drastically reduce the bandwidth required for image transmission, facilitating faster image acquisition in bandwidth-constrained environments.
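For a rough sense of scale, the back-of-envelope calculation below compares a full-frame downlink to a landmark-plus-hot-pixel summary. Only the 500-kilometer swath at 250-meter pixels comes from the document; frame height, bit depth, and feature counts are assumptions chosen purely for illustration.

```python
# Rough downlink comparison: full frame vs. landmark/hot-pixel summary.
# Assumed values are marked; only the swath and pixel size come from the text.
swath_pixels   = 500_000 // 250          # 2000 samples across the 500 km swath
frame_rows     = 2000                    # assumed square frame
bits_per_pixel = 14                      # assumed radiometric depth

full_frame_bytes = swath_pixels * frame_rows * bits_per_pixel // 8

n_landmarks  = 300                       # assumed high-contrast features per frame
n_hot_pixels = 20                        # assumed lenient-threshold detections
summary_bytes = n_landmarks * 128 + n_hot_pixels * 8   # SIFT descriptor = 128 bytes

print(f"full frame: {full_frame_bytes / 1e6:.1f} MB")
print(f"summary:    {summary_bytes / 1e3:.1f} kB "
      f"(~{full_frame_bytes / summary_bytes:.0f}x smaller)")
```

Under these assumed numbers the summary is roughly two orders of magnitude smaller than the raw frame, which is the kind of reduction the document refers to.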
The significance of the results extends beyond immediate wildfire detection applications; they provide a foundational estimate of cost and performance for future satellite deployments. This research positions JPL to engage with potential sponsors for sensor development projects and contributes to the broader goal of improving wildfire management through advanced technology.
In summary, the document outlines a promising approach to wildfire detection using advanced thermal imaging and sensor technology, emphasizing the importance of multiple-frame analysis to enhance detection capabilities and operational efficiency in satellite-based monitoring systems.

