
Algorithm for Compressing Time-Series Data

Next, a bit-control word is generated; during the subsequent decompression process, it indicates where the quantized retained coefficients are to be inserted and where place holders (zeroes) are to be inserted for coefficients that were not retained. The bit-control word is then encoded by a lossless compression technique; this step can significantly increase the overall compression ratio without introducing additional loss. If more data blocks remain to be processed, the process described thus far is repeated for the next block; otherwise, the compressed data and their control words are transmitted.
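Although the brief does not give an implementation, the role of the bit-control word can be illustrated with a minimal Python sketch; the function names, the use of NumPy, and the simple uniform quantizer are assumptions for illustration, not part of the original design:

```python
import numpy as np

def compress_block(coeffs, threshold, n_bits):
    # Bit-control word: one bit per coefficient, 1 = retained, 0 = not retained.
    control = np.abs(coeffs) >= threshold
    retained = coeffs[control]
    # Uniform quantization of the retained coefficients to signed n_bits values,
    # scaled by the largest retained magnitude (an illustrative choice).
    scale = float(np.max(np.abs(retained))) if retained.size else 1.0
    levels = (1 << (n_bits - 1)) - 1
    quantized = np.round(retained / scale * levels).astype(np.int32)
    return control, quantized, scale

def decompress_block(control, quantized, scale, n_bits):
    # Use the bit-control word to place dequantized coefficients and to insert
    # zeroes (place holders) at the locations of coefficients not retained.
    levels = (1 << (n_bits - 1)) - 1
    coeffs = np.zeros(control.size)
    coeffs[control] = quantized / levels * scale
    return coeffs
```

In practice the boolean control array would be packed into bits and passed through the lossless coder mentioned above before transmission; the sketch leaves it unpacked for clarity.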

The results obtained by use of the algorithm depend partly on three parameters: the block size (the number of data samples in a block), the aforementioned threshold value, and the aforementioned number of quantization bits. By adjusting the values of these parameters for different types of data, one can obtain usefully large compression ratios with minimal errors. Higher threshold values always result in greater compression ratios at the expense of quality of reconstructed signals. Increasing numbers of quantization bits generally reduces compression ratios but yields reconstructed signals of higher quality. Increasing block sizes yields more-varied results; in general, larger compression ratios are associated with larger blocks because fewer block maxima and minima are stored.
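As a rough illustration of these trade-offs, the per-block compression ratio implied by the quantities in the sketch above can be estimated as follows; the 16-bit raw-sample width is an assumption, and the lossless coding of the control word and the stored block maxima and minima are ignored:

```python
def approx_compression_ratio(control, quantized, n_bits, sample_bits=16):
    # Original block: one sample_bits-wide word per data sample.
    original_bits = control.size * sample_bits
    # Compressed block: one control bit per coefficient plus n_bits per
    # retained coefficient (per-block overhead is not modeled here).
    compressed_bits = control.size + quantized.size * n_bits
    return original_bits / compressed_bits
```

Raising the threshold shrinks the number of retained coefficients and so raises the ratio; adding quantization bits increases n_bits and lowers it; and larger blocks spread the fixed per-block overhead (not modeled above) over more samples.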

This work was done by S. Edward Hawkins III and Edward Hugo Darlington of Johns Hopkins University Applied Physics Laboratory for Goddard Space Flight Center. For further information, contact the Goddard Innovative Partnerships Office at (301) 286-5810. GSC-14820-1