A program of research on the use of wavelets for compression of data in a parallel-computing environment has led to development of a scheme for compressing image data. The purpose of the research was to determine whether one could achieve an acceptably high compression ratio with acceptably small loss of image data, at a speed adequate for a given real-time application, provided that one could afford to buy and use any number of modern, high-performance computers in parallel and pipeline processing.

The scheme involves a three-stage pipeline procedure and a "toolkit" of alternative compression methods from which one can choose in customizing the processing for a given application. In the first stage in the pipeline, no compression takes place; instead, the data are processed through filters defined by the user to decompose the data into subbands (e.g., frequency or wavelet subbands) in preparation for the subsequent stages.
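The article does not specify the user-defined filters, but a one-level Haar wavelet decomposition is a minimal sketch of how this first stage splits an image into four subbands (LL, LH, HL, HH); the function name and the choice of the Haar filter are illustrative assumptions, not the authors' actual filter bank.

```python
import numpy as np

def haar_subbands(image):
    """One level of 2-D Haar wavelet decomposition.

    Illustrative sketch (Haar filters assumed): splits an image with even
    dimensions into four half-resolution subbands LL, LH, HL, HH.
    """
    # The four pixels of each 2x2 block, as separate half-size arrays.
    a = image[0::2, 0::2].astype(float)
    b = image[0::2, 1::2].astype(float)
    c = image[1::2, 0::2].astype(float)
    d = image[1::2, 1::2].astype(float)
    ll = (a + b + c + d) / 4.0  # local average (low-low)
    lh = (a + b - c - d) / 4.0  # vertical detail
    hl = (a - b + c - d) / 4.0  # horizontal detail
    hh = (a - b - c + d) / 4.0  # diagonal detail
    return ll, lh, hl, hh
```

No information is discarded at this stage: the four subbands together exactly determine the original pixels, which is why compression is deferred to the later stages.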

Figure: Plot of data from a computational experiment on a test image, illustrating the superiority of the wavelet-decomposition/vector-quantization version of the present scheme over the JPEG scheme in terms of the L-2 metric, the sum of squared errors between the original and reconstructed (final decoded) pixel values.
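The L-2 metric used in the figure follows directly from its definition; a minimal sketch:

```python
import numpy as np

def l2_error(original, reconstructed):
    """L-2 metric: sum of squared pixel errors between the original
    image and its reconstructed (final decoded) version."""
    o = np.asarray(original, dtype=float)
    r = np.asarray(reconstructed, dtype=float)
    return float(((o - r) ** 2).sum())
```

A lower value indicates a reconstruction closer to the original image.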

In the second stage, the data in each subband are compressed by use of vector quantization. As in any quantization method, some information is lost. Because vector quantization is computationally demanding, it is accomplished by use of multiple high-performance computers in a parallel-processing, message-passing architecture.
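The article does not describe the codebook design, but the core quantization step can be sketched as a nearest-codeword search: each data vector is replaced by the index of its closest entry in a codebook (assumed here to have been trained separately, e.g., by a k-means-style algorithm). The function names are illustrative, not from the original work.

```python
import numpy as np

def vector_quantize(vectors, codebook):
    """Map each input vector to the index of its nearest codeword
    (squared Euclidean distance). This step is lossy: only the index
    is kept, so the vector is later approximated by its codeword."""
    # Pairwise squared distances, shape (n_vectors, n_codewords).
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def dequantize(indices, codebook):
    """Reconstruct each vector as its assigned codeword."""
    return codebook[indices]
```

Because each vector is assigned independently of the others, the search parallelizes naturally across multiple processors, consistent with the message-passing architecture described above.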

In the third stage, compression is effected by a method of entropy-based encoding. The encoding in this stage is lossless and can result in doubling of the compression ratio with little or no increase in computational complexity.
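The article does not name the entropy-coding method; Huffman coding is one standard entropy-based, lossless choice and serves here as an illustrative sketch: frequent quantization indices receive short bit strings and rare ones receive long bit strings, shrinking the bitstream without losing information.

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code (symbol -> bit string) from a symbol sequence.

    Illustrative entropy coder; the original work may use a different
    entropy-based method.
    """
    freq = Counter(symbols)
    if len(freq) == 1:  # degenerate case: a single distinct symbol
        return {next(iter(freq)): "0"}
    # Heap entries: (total weight, tiebreak id, {symbol: partial code}).
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        # Merge the two lightest subtrees, prefixing "0" and "1".
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (w1 + w2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]
```

Because the code is prefix-free, the concatenated bit strings decode unambiguously back to the original symbol sequence, so this stage adds no loss.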

Computational experiments were performed to test two versions of the present scheme in comparison with each other and with the Joint Photographic Experts Group (JPEG) scheme, which is a lossy scheme particularly useful for compression of color image data with little apparent image degradation as perceived by the human eye. One version of the present scheme included vector quantization with subband (wavelet) decomposition; the other version included vector quantization without subband decomposition. The primary findings from the experiments are that (1) vector quantization is the major source of compression and (2) by use of wavelet-based subband decomposition, one can increase the compression ratio, albeit with a concomitant increase in the error rate. The performance of the present scheme was found to be superior to, or at least equal to, that of the JPEG scheme in the test cases (see figure).

This work was done by Harry Berryman, James Navem, Jr., and Gary Davison of Ronin Systems, Inc., and Manos Papaefthymiou for Lewis Research Center. For further information, access the Technical Support Package (TSP) free on-line at www.techbriefs.com under the Mathematics and Information Sciences category, or circle no. 113 on the TSP Order Card in this issue to receive a copy by mail ($5 charge). Inquiries concerning rights for the commercial use of this invention should be addressed to NASA Lewis Research Center, Commercial Technology Office, Attn: Tech Brief Patent Status, Mail Stop 7-3, 21000 Brookpark Road, Cleveland, Ohio 44135. Refer to LEW-16372.


NASA Tech Briefs Magazine

This article first appeared in the April 1998 issue of NASA Tech Briefs Magazine.
