An algorithm has been devised to greatly simplify and improve the calibration of an imaging interferometer (or similar interferometric instrument) and the reduction of its systematic noise. Prior to the advent of this algorithm, such calibration was achieved by means of a conventional "flat-field" correction method that requires uniform illumination of all pixels in the image detector of the instrument. To achieve uniform illumination, it is often necessary to disassemble the instrument to separate the image detector from the instrument optics, taking care to keep all optical and electronic components clean and to restore the original optical alignment upon reassembly.

Figure: Results After Various Stages of Processing, shown for a particular interferogram row of a detector array, together with the corresponding spectra of the interferograms after Fourier transformation.

This algorithm makes it unnecessary to illuminate the detector uniformly or to perform difficult and time-consuming laboratory calibration, disassembly, and reassembly procedures. Calibration information can be extracted from ordinary images — even from highly variable scenes. With this algorithm, calibration can be performed with much more flexibility — in the laboratory or in the field. Moreover, the algorithm makes it possible to calibrate the instrument in nearly real time — immediately before or after acquisition of interferometric images — so that one can have some assurance that there has not been enough time for vibrations and other environmental factors to affect the calibration significantly.

The imaging interferometers for which this algorithm was designed acquire a spectral image one spatial line at a time on an image detector that contains a two-dimensional pixel array. In contrast to a conventional scanning interferometer, the optics of such an instrument are fixed. The array is illuminated by two beams of light that interfere coherently, giving rise to an interferogram. The interferograms are recorded, then Fourier-transformed to obtain spectra.
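
As a rough illustration of this acquisition model, the sketch below (with hypothetical array shapes and placeholder data, not the instrument's actual processing code) Fourier-transforms each recorded interferogram row to obtain a spectrum for each spatial position:

```python
import numpy as np

# Hypothetical detector frame: rows are positions along the imaged spatial line,
# columns are samples of the interferogram versus optical path difference (OPD).
n_spatial, n_opd = 256, 512
frame = np.random.default_rng(0).normal(size=(n_spatial, n_opd))  # stand-in data

# Fourier transforming along the path-difference axis converts each recorded
# interferogram into a spectrum; for real-valued data the one-sided rfft suffices.
spectra = np.fft.rfft(frame, axis=1)
magnitude = np.abs(spectra)     # one spectrum per spatial position
print(magnitude.shape)          # (256, 257)
```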

The optical configuration of the instrument is such that each pixel along one dimension represents an increment of position along the spatial line, while each pixel along the perpendicular dimension represents an increment of the path difference between the two interfering beams at that position. The data product of the interferometer, obtained by putting together the spatial and Fourier-transform spectral information from all spatial lines across a scene, is an image "cube" that comprises a two-dimensional spatial image with a spectrum (the Fourier transform of the recorded interferogram) for each spatial pixel.
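
In data-structure terms, the cube can be pictured as a stack of such per-line spectra over the scan direction; the sketch below assumes arbitrary dimensions and synthetic data purely for illustration:

```python
import numpy as np

# Assumed cube dimensions: scanned lines, spatial pixels per line, OPD samples.
n_lines, n_spatial, n_opd = 32, 64, 256
frames = np.random.default_rng(1).normal(size=(n_lines, n_spatial, n_opd))

# Transforming along the path-difference axis gives a spectrum for each spatial
# pixel; stacking the scanned lines yields the (line, position, wavenumber) cube.
cube = np.abs(np.fft.rfft(frames, axis=2))
print(cube.shape)   # (32, 64, 129): two spatial axes plus one spectral axis
```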

The present calibration algorithm exploits the special physical and mathematical characteristics of the two-beam interference phenomenon (i.e., its constraints and symmetry properties), as rendered by the optics of the instrument onto the detector plane. The spatial and interferogram coordinates at the detector plane are orthogonal and therefore separable. In principle, then, a single-frame observation that is uniform along the spatial coordinate (or an average of many frames that approaches such uniformity) will yield fringes of equal inclination that are parallel to, and unvarying along, the spatial coordinate. The algorithm transforms real, imperfect fringe patterns so that they can be represented as fringes of equal inclination, enabling very effective extraction of systematic noise for subsequent use in treating data cubes on a frame-by-frame basis.
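
A minimal sketch of that separability argument follows (illustrative only, with an assumed fringe model and assumed gain and noise levels): if the scene is uniform along the spatial axis, the column-wise average of a frame approximates the ideal, spatially unvarying fringe pattern, and the departure of each pixel from that average exposes the systematic structure the calibration targets.

```python
import numpy as np

rng = np.random.default_rng(2)
n_spatial, n_opd = 256, 512
opd = np.linspace(-1.0, 1.0, n_opd)

# Idealized two-beam fringes: identical along the spatial coordinate.
ideal = 2.0 + np.cos(40.0 * np.pi * opd)
frame = np.tile(ideal, (n_spatial, 1))

# A real frame adds pixel-to-pixel gain variation and random noise.
gain = 1.0 + 0.05 * rng.normal(size=(n_spatial, n_opd))
observed = gain * frame + 0.01 * rng.normal(size=(n_spatial, n_opd))

# If the scene is uniform along the spatial axis (or many frames are averaged),
# the column-wise mean recovers the common fringe pattern...
common_fringe = observed.mean(axis=0)

# ...and the ratio to that mean exposes the pixel-variation structure that the
# calibration seeks to characterize and remove.
pixel_variation = observed / common_fringe
```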

The algorithm prescribes a series of reversible transformations to the two-dimensional Fourier domain of the pixel-array space. An important part of the algorithm is a row-wise phase-alignment subalgorithm that is analogous to the formalism used to process asymmetrical interferograms produced by scanning Fourier spectrometers. Phase alignment eliminates the effects of variation in interference-fringe path-difference scales over rows of the detector plane. The resulting signal in Fourier space then represents the spectrum of the row-wise coherently coadded interferograms of an entire frame; it is highly localized and optimally isolated from noise and systematic errors. This localized signal is removed by filtering, and an inverse composite transformation is performed to obtain a pixel-variation frame that is used to treat the image data.
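
The sketch below is a loose, one-dimensional analogy of those steps under assumed data shapes and a synthetic fringe model; the patented algorithm operates on the two-dimensional Fourier domain of the whole frame, so this is meant only to convey the flavor of phase alignment, coherent coaddition, signal filtering, and inversion to a pixel-variation frame.

```python
import numpy as np

rng = np.random.default_rng(3)
n_spatial, n_opd = 64, 256
opd = np.linspace(0.0, 1.0, n_opd, endpoint=False)

# Synthetic frame: the same fringe pattern in every row, but with row-dependent
# phase offsets (standing in for variation of the path-difference scale across
# detector rows) and a fixed pixel-to-pixel gain pattern (the systematic noise).
offsets = rng.uniform(0.0, 0.5 * np.pi, size=n_spatial)
gain = 1.0 + 0.05 * rng.normal(size=(n_spatial, n_opd))
frame = gain * (2.0 + np.cos(2.0 * np.pi * 16.0 * opd[None, :] + offsets[:, None]))

# Transform each row, then align phases: remove each row's phase at the dominant
# fringe frequency so that all rows add coherently.
row_fft = np.fft.fft(frame, axis=1)
peak = 1 + np.argmax(np.abs(row_fft[:, 1:n_opd // 2]).mean(axis=0))
phases = np.angle(row_fft[:, peak])
aligned = row_fft * np.exp(-1j * phases)[:, None]

# The coherently coadded fringe signal is now concentrated in a few bins; zero
# those bins (filter out the signal), undo the alignment, and invert to obtain
# a rough estimate of the pixel-variation frame.
filtered = aligned.copy()
filtered[:, [0, peak, n_opd - peak]] = 0.0
residual_fft = filtered * np.exp(1j * phases)[:, None]
pixel_variation = np.fft.ifft(residual_fft, axis=1).real
```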

The algorithm does not depend on such instrument parameters as the spatial and spectral ranges and resolutions. Additionally, the algorithm treats bad detector pixels in a self-consistent manner and accommodates aberrations caused by imperfections in the detector and associated optics. Image data treated by the algorithm approach the fundamental limit (defined by photon shot noise or random detector noise) of the signal-to-noise capability of the instrument. The benefits obtained from applying the algorithm to typical radiative spectra of the atmosphere are designated as "2nd order flattened" in the figure.

The algorithm has been implemented in prototype software, written partly in Interactive Data Language (IDL), that includes a comprehensive set of routines for processing imaging-interferometer data. It should be possible to make a commercially viable software product by translating the IDL code into an efficient computing language and integrating it with a comprehensive data-processing-and-visualization program that includes a user interface.

This work was done by Philip D. Hammer of Ames Research Center. For further information, access the Technical Support Package (TSP) free on-line at www.nasatech.com/tsp under the Test & Measurement category.

This invention has been patented by NASA (U.S. Patent No. 5,675,513). Inquiries concerning nonexclusive or exclusive license for its commercial development should be addressed to

the Patent Counsel
Ames Research Center
(650) 604-5104.

Refer to ARC-14054.