No-shift universal trellis-coded quantization (NSUTCQ) is an image-data-compression/decompression algorithm designed to be especially useful in telemedicine. Like some other image-compression/decompression algorithms, this one provides (1) lossy compression/decompression for transmitting most of the information in an image, subject to competing requirements to limit transmission time, transmission bandwidth, and image distortion, and (2) lossless (or constrained-loss) compression/decompression for transmitting the residual information (the remainder of the information needed for reconstruction in full detail) about regions of interest (ROIs) within images. Thus, in telemedicine, a diagnostician could preliminarily view less-detailed versions of images, then select ROIs that appear significant and request reconstruction of the fully detailed versions of those ROIs.
The NSUTCQ algorithm is a modified version of a previously developed lossy compression/decompression algorithm called "adaptive wavelet/universal trellis-coded quantization" ("wavelet/UTCQ" for short). The wavelet/UTCQ algorithm begins with the use of a wavelet decomposition to transform gray-scale images into wavelet coefficients. The wavelet decomposition can be followed by an adaptive subblock classification to improve coder performance. Next, the wavelet coefficients in each subblock are quantized. The quantization subalgorithm can be characterized as a highly structured vector quantizer or as a scalar quantizer with memory. The quantizer maps the wavelet coefficients to quantization indices. The quantization process is lossy; that is, it introduces distortions into the wavelet coefficients. The goal in designing a good quantizer is to ensure that the distortions do not seriously degrade the reconstructed imagery. In the wavelet/UTCQ algorithm, the quantization indices, produced by an eight-state trellis-coded quantization (TCQ) scheme, are adaptively arithmetically encoded.
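The transform-then-quantize pipeline described above can be illustrated with a minimal sketch. This is not the actual UTCQ quantizer or the wavelet used in the algorithm; it is a one-dimensional Haar decomposition followed by plain uniform scalar quantization, chosen only to show how wavelet coefficients become integer quantization indices and how the lossy step bounds the reconstruction error.

```python
# Illustrative sketch only (NOT the actual UTCQ quantizer or wavelet):
# a one-level Haar wavelet decomposition of a 1-D signal, uniform scalar
# quantization of the coefficients into integer indices, and the lossy
# reconstruction that results.
import math

def haar_decompose(signal):
    """Split a signal of even length into low-pass (average) and
    high-pass (detail) wavelet coefficients."""
    low, high = [], []
    for i in range(0, len(signal), 2):
        a, b = signal[i], signal[i + 1]
        low.append((a + b) / math.sqrt(2))
        high.append((a - b) / math.sqrt(2))
    return low, high

def haar_reconstruct(low, high):
    """Invert haar_decompose exactly (lossless without quantization)."""
    signal = []
    for l, h in zip(low, high):
        signal.append((l + h) / math.sqrt(2))
        signal.append((l - h) / math.sqrt(2))
    return signal

def quantize(coeffs, step):
    """Map each coefficient to an integer index -- the lossy step."""
    return [round(c / step) for c in coeffs]

def dequantize(indices, step):
    return [i * step for i in indices]

signal = [10.0, 12.0, 11.0, 9.0, 50.0, 52.0, 49.0, 51.0]
low, high = haar_decompose(signal)
step = 1.0
rec = haar_reconstruct(dequantize(quantize(low, step), step),
                       dequantize(quantize(high, step), step))
# Each coefficient is off by at most step/2, so each reconstructed
# sample is off by at most step/sqrt(2) for this transform.
max_err = max(abs(a - b) for a, b in zip(signal, rec))
```

In the real algorithm the indices would then be entropy coded (adaptively arithmetically encoded), which is lossless and so adds no further distortion.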
The NSUTCQ algorithm differs from the UTCQ algorithm in two major respects:
- Changes in the quantizer, involving codebook structures, probability models, and other mathematical considerations too complex to be described here, reduce the sizes of the quantization steps. Absolute errors should therefore be smaller.
- A subalgorithm that involves a defined error tolerance and binning with adaptive arithmetic encoding of residual values for ROIs has been added. The figure is an example of a reconstructed image with two residual-encoded ROIs.
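The defined-error-tolerance binning idea can be sketched as follows. The function names and the bin-width choice are illustrative assumptions, not details given in the source: residuals between the original ROI pixels and the lossy reconstruction are binned with a width of 2*tol + 1, which for integer pixel values guarantees every refined ROI pixel lies within +/- tol of the original. The adaptive arithmetic encoding of the bin indices is omitted here.

```python
# Hedged sketch of constrained-loss ROI residual binning (the actual
# NSUTCQ subalgorithm additionally entropy-codes the indices).
def bin_residuals(original, lossy, tol):
    """Bin integer residuals so reconstruction error is at most tol."""
    width = 2 * tol + 1
    return [round((o - l) / width) for o, l in zip(original, lossy)]

def apply_residuals(lossy, indices, tol):
    """Refine the lossy ROI pixels using the transmitted bin indices."""
    width = 2 * tol + 1
    return [l + i * width for l, i in zip(lossy, indices)]

original = [118, 122, 130, 127, 140]       # hypothetical ROI pixels
lossy    = [115, 125, 126, 131, 138]       # e.g. from the lossy decoder
tol = 1
indices = bin_residuals(original, lossy, tol)
refined = apply_residuals(lossy, indices, tol)
assert all(abs(o - r) <= tol for o, r in zip(original, refined))
```

Setting tol = 0 gives the lossless case; larger tolerances shrink the index alphabet and so cost fewer bits for the residual stream.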
The UTCQ and the NSUTCQ algorithms both have their places in an image-compression/decompression scheme in that their capabilities are complementary: the NSUTCQ algorithm performs better at high bit rates, the UTCQ algorithm at low ones. An examination of bit-rate allocations made by an experimental encoder that implements both algorithms revealed that low-frequency subbands were compressed at high bit rates while high-frequency subbands were compressed at low bit rates. Therefore, it makes sense to use both quantizers, choosing the one that best suits the wavelet subband being quantized.
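The per-subband selection suggested above amounts to a simple rule. The threshold value and the subband rates below are illustrative assumptions, not figures from the source:

```python
# Hypothetical selection rule: pick the quantizer suited to each
# subband's allocated bit rate.  The 1.0 bit/coefficient threshold is
# an assumed illustrative value, not one given in the source.
def choose_quantizer(bits_per_coeff, threshold=1.0):
    return "NSUTCQ" if bits_per_coeff >= threshold else "UTCQ"

# Low-frequency subbands tend to receive high rates, high-frequency
# subbands low rates (example allocations, not measured data).
allocations = {"LL": 2.5, "LH": 0.8, "HL": 0.7, "HH": 0.2}
plan = {band: choose_quantizer(rate) for band, rate in allocations.items()}
```

Under this rule the low-frequency (high-rate) subbands go to NSUTCQ and the high-frequency (low-rate) subbands to UTCQ, matching the complementary behavior described above.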
This work was done by Jim Kasner of Optivision Inc. for Glenn Research Center.
Inquiries concerning rights for the commercial use of this invention should be addressed to
NASA Glenn Research Center
Commercial Technology Office
Attn: Steve Fedor
Mail Stop 4-8
21000 Brookpark Road
Refer to LEW-16667.