Organizing Compression of Hyperspectral Imagery to Allow Efficient Parallel Decompression

Higher compression factors can be attained while still allowing efficient parallel decompression.

A family of schemes has been devised for organizing the output of an algorithm for predictive data compression of hyperspectral imagery so as to allow efficient parallelization in both the compressor and decompressor. In these schemes, the compressor performs a number of iterations, during each of which a portion of the data is compressed via parallel threads operating on independent portions of the data. The general idea is that for each iteration, the amount of compressed data each thread will produce is predetermined.

A simple version of this technique is applicable when the image is divided into “pieces” that are compressed independently. As an example, for a compressor that does not make use of interband correlation, a piece could be defined to be an individual spectral band, or a fixed number of bands. In the technique, the compressed output for a piece comprises multiple “chunks”; the concatenated chunks for a given piece form the compressed output for that piece. Most of the compressed image is produced in multiple iterations, where during a given iteration, one chunk is produced for each piece. Prior to the start of an iteration, a chunk size is calculated for each piece. The chunks can then be produced, or decompressed, in parallel. Note that the amount of image data that goes into a chunk is not fixed; in fact, a chunk may contain incomplete portions of encoded samples at its start or end. The compressor iterates the process of deciding on chunk sizes and producing a chunk of the requested size for each piece, until compression of each piece is almost finished. At that point, the remainders of the pieces are compressed serially without a target chunk size.
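The chunked layout described above can be sketched as follows. This is an illustrative toy, not the authors' implementation: zlib stands in for the predictive compressor, the per-iteration chunk size is a fixed constant rather than a computed target, and all names are hypothetical. The point it demonstrates is the data organization: chunk k of every piece belongs to iteration k, boundaries are known in advance, chunks may split an encoded sample, and concatenating a piece's chunks recovers its compressed stream.

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 64  # illustrative fixed per-iteration chunk size (in bytes)

def compress_piece(piece: bytes) -> bytes:
    # Stand-in for compressing one independent piece (e.g., one spectral band).
    return zlib.compress(piece)

def chunk_pieces(pieces):
    """Compress pieces in parallel, then lay the output out as interleaved
    chunks: iteration k holds chunk k of every piece. Chunk boundaries fall
    at predetermined byte offsets, so they may split an encoded sample."""
    with ThreadPoolExecutor() as pool:
        compressed = list(pool.map(compress_piece, pieces))
    iterations = []
    offset = 0
    while any(offset < len(c) for c in compressed):
        iterations.append([c[offset:offset + CHUNK_SIZE] for c in compressed])
        offset += CHUNK_SIZE
    return iterations

def decompress_pieces(iterations, n_pieces):
    """Decompressor side: since every chunk's size and position are known,
    the per-piece streams can be reassembled (and decoded) independently."""
    streams = [b"".join(it[i] for it in iterations) for i in range(n_pieces)]
    return [zlib.decompress(s) for s in streams]

pieces = [bytes([i]) * 500 for i in range(4)]  # stand-in spectral bands
iterations = chunk_pieces(pieces)
assert decompress_pieces(iterations, len(pieces)) == pieces
```

Because each iteration's chunk sizes are fixed before the chunks are produced, both sides can compute every chunk's offset up front, which is what makes the parallel production and consumption possible.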

Typically, the chunk size calculation should seek to balance the progress through each piece, i.e., to leave equal numbers of samples remaining in each piece; a suggested procedure has this aim. A key requirement is that reasonable chunk sizes must be chosen based only on information available from the compressed data at that point in the process. Similarly, it must be possible to determine, from previously processed data, when to switch from parallel chunk compression to the serial process that completes each piece.
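One possible balancing rule along these lines can be sketched as follows. This is an assumption-laden illustration, not the suggested procedure from the work: it weights each piece by its remaining samples times its observed compressed rate, using only quantities known from the data already compressed.

```python
def plan_chunk_sizes(samples_remaining, bytes_per_sample_seen, budget_bytes):
    """Split a per-iteration byte budget across pieces.

    samples_remaining:     samples left to encode in each piece
    bytes_per_sample_seen: compressed bytes per sample observed so far
                           for each piece (known from prior output)
    budget_bytes:          total compressed bytes to emit this iteration

    Pieces expected to need more compressed bytes get larger chunks,
    driving all pieces toward finishing at about the same time.
    """
    weights = [n * r for n, r in zip(samples_remaining, bytes_per_sample_seen)]
    total = sum(weights)
    return [max(1, round(budget_bytes * w / total)) for w in weights]

# Two pieces with equal rates but unequal backlogs: the piece with
# three times the remaining samples gets three times the chunk size.
sizes = plan_chunk_sizes([300, 100], [1.0, 1.0], 400)
assert sizes == [300, 100]
```

A rule like this uses only the observed compressed rate, so both compressor and decompressor can reproduce the same chunk sizes from the data already in hand, as the requirement above demands.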

A more general technique accommodates pieces that are not compressed independently, allowing compressors such as the Fast Lossless (FL) to more fully exploit dependencies between spectral bands, which generally allows a higher compression factor to be achieved.

This work was done by Matthew A. Klimesh and Aaron B. Kiely of Caltech for NASA’s Jet Propulsion Laboratory. NPO-48521

This Brief includes a Technical Support Package (TSP).

