A lossless image-data-compression algorithm intended specifically for classification-map data is based on prediction, context modeling, and entropy coding. The algorithm was formulated, in consideration of the differences between classification maps and ordinary images of natural scenes, to compress classification-map data more effectively than general-purpose image-data-compression algorithms do.

Figure. This false-color image and classification map were derived from image data acquired by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) over Moffett Field, California. The classification map is typical of images meant to be processed by the present algorithm.

Classification maps are typically generated from remote-sensing images acquired by instruments aboard aircraft (see figure) and spacecraft. A classification map is a synthetic image that summarizes information derived from one or more original remote-sensing images of a scene. The value assigned to each pixel in such a map is the index of a class that represents some type of content deduced from the original image data — for example, a type of vegetation, a mineral, or a body of water — at the corresponding location in the scene. When classification maps are generated onboard the aircraft or spacecraft, it is desirable to compress the classification-map data in order to reduce the volume of data that must be transmitted to a ground station.

Unlike ordinary (continuous-tone) images, a classification map typically contains a relatively small number of distinct pixel values. Also, unlike in continuous-tone images, numerically close pixel values do not necessarily represent similar content. These properties make the problem of compressing classification-map data differ from the problem of compressing data from ordinary images.

Prediction is commonly used in lossless-compression schemes. In predictive compression, pixels or other samples are encoded sequentially on the basis of a probability distribution estimated from previously encoded samples. Context modeling is often used in conjunction with predictive compression. In context modeling, each pixel or other sample to be encoded is classified into one of several contexts based on previously encoded samples. A context-modeling algorithm maintains separate statistics for each context and uses these statistics to estimate and encode samples more effectively. Ideally, contexts are defined so that different contexts contain sets of pixels or other samples characterized by substantially different statistics.
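The following minimal sketch illustrates this general pattern. It is not the authors' implementation (the article does not specify their contexts or coder); the neighbor-based context, the Laplace-smoothed counts, and all names here are illustrative assumptions. Each pixel's context is formed from previously encoded neighbors, and separate counts are kept per context to estimate the probability that would be handed to an entropy coder.

```python
from collections import defaultdict

class ContextModel:
    """Toy per-context frequency model: separate counts for each context,
    used to estimate symbol probabilities for an entropy coder."""

    def __init__(self, alphabet_size):
        self.alphabet_size = alphabet_size
        # counts[context][symbol] -> occurrences of symbol seen in context
        self.counts = defaultdict(lambda: defaultdict(int))
        self.totals = defaultdict(int)

    def probability(self, context, symbol):
        # Laplace smoothing keeps unseen symbols at nonzero probability.
        return (self.counts[context][symbol] + 1) / \
               (self.totals[context] + self.alphabet_size)

    def update(self, context, symbol):
        self.counts[context][symbol] += 1
        self.totals[context] += 1

def model_image(image, num_classes):
    """Walk the image in raster order; the context of each pixel is the
    pair of its causal neighbors (left, above), with -1 off the edges."""
    model = ContextModel(num_classes)
    probs = []
    for y in range(len(image)):
        for x in range(len(image[0])):
            left = image[y][x - 1] if x > 0 else -1
            above = image[y - 1][x] if y > 0 else -1
            ctx = (left, above)
            # Probability estimate that an entropy coder would consume:
            probs.append(model.probability(ctx, image[y][x]))
            model.update(ctx, image[y][x])
    return probs
```

Because only previously encoded (causal) neighbors enter the context, a decoder can reconstruct the same context, and hence the same probability estimate, at every step.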

The present algorithm incorporates a simple adaptive context modeler that feeds into a binary interleaved entropy coder. The algorithm operates on the pixels of a classification map or other image in raster-scan order. For each pixel, a sequence of binary decision bits is produced to indicate which, if any, of its neighboring pixels it matches. The encoder maintains a probability-of-zero estimate for these bits in each context. The interleaved entropy coder is bit-wise adaptable, enabling the context modeler to adapt quickly to changing statistics in the image.
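A sketch of how such decision bits might be generated appears below. The particular neighbor set (left, above, above-left) and the testing order are assumptions for illustration; the article does not state which neighbors the algorithm tests or in what order. Each emitted bit would be passed to the entropy coder along with the probability-of-zero estimate for that bit's context.

```python
def decision_bits(image, y, x):
    """Produce match/no-match decision bits for one pixel by testing its
    causal neighbors in a fixed, illustrative order."""
    candidates = []
    if x > 0:
        candidates.append(image[y][x - 1])        # left neighbor
    if y > 0:
        candidates.append(image[y - 1][x])        # neighbor above
    if y > 0 and x > 0:
        candidates.append(image[y - 1][x - 1])    # neighbor above-left

    bits = []
    tried = []
    for value in candidates:
        if value in tried:
            continue             # duplicate candidate values add no information
        tried.append(value)
        if image[y][x] == value:
            bits.append(0)       # 0 = "matches this neighbor"; sequence ends
            return bits
        bits.append(1)           # 1 = "no match"; test the next neighbor
    return bits                  # all ones: pixel value coded by other means
```

Since the coder is bit-wise adaptable, the probability-of-zero estimate for a context can be updated immediately after each bit is coded, which is what lets the modeler track changing image statistics quickly.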

In tests, the present algorithm and three prior general-purpose image-data-compression algorithms were applied to five classification maps containing from 4 to 32 different classes. The four-class map is shown in the figure. In these tests, the compressed data volumes produced by the present algorithm were 15 to 40 percent smaller than those produced by the prior algorithms.

This work was done by Hua Xie and Matthew Klimesh of Caltech for NASA’s Jet Propulsion Laboratory.

The software used in this innovation is available for commercial licensing. Please contact Karina Edmonds of the California Institute of Technology at (626) 395-2322. Refer to NPO-45103.