2011

Method and System for Temporal Filtering in Video Compression Systems

This filtering improvement increases compression efficiency for visual signal components in low-power applications.

Three related innovations combine improved non-linear motion estimation, video coding, and video compression. The first system comprises a method in which side information is generated using an adaptive, non-linear motion model. This method enables extrapolating and interpolating a visual signal by determining a first motion vector from a first pixel position in a first image to a second pixel position in a second image; determining a second motion vector from the second pixel position in the second image to a third pixel position in a third image; determining a third motion vector based upon the first pixel position in the first image, the second pixel position in the second image, and the third pixel position in the third image using a non-linear model; and determining a position of a fourth pixel in a fourth image based upon the third motion vector.
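The steps above can be sketched in code. This is a minimal illustration that assumes a constant-acceleration (quadratic) trajectory as the non-linear model; the disclosure's actual adaptive model is not specified here, and the function name is hypothetical.

```python
# Sketch of non-linear motion extrapolation under an assumed
# constant-acceleration (quadratic) model fitted through three
# co-located pixel positions in consecutive frames.

def extrapolate_position(p1, p2, p3):
    """Predict the pixel position in a fourth frame from (x, y)
    positions observed in three earlier frames."""
    # First motion vector: frame 1 -> frame 2
    mv1 = (p2[0] - p1[0], p2[1] - p1[1])
    # Second motion vector: frame 2 -> frame 3
    mv2 = (p3[0] - p2[0], p3[1] - p2[1])
    # Acceleration term captures the non-linear (quadratic) component
    acc = (mv2[0] - mv1[0], mv2[1] - mv1[1])
    # Third motion vector extrapolated under constant acceleration
    mv3 = (mv2[0] + acc[0], mv2[1] + acc[1])
    # Position of the fourth pixel in the fourth image
    return (p3[0] + mv3[0], p3[1] + mv3[1])

# Example: motion accelerating along x
print(extrapolate_position((0, 0), (1, 0), (3, 0)))  # -> (6, 0)
```

With purely linear motion (equal motion vectors) the acceleration term vanishes and the prediction reduces to ordinary linear extrapolation.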

For the video compression element, the video encoder has low computational complexity and high compression efficiency. The disclosed system comprises a video encoder and a decoder. The encoder converts the source frame into a space-frequency representation and estimates the conditional statistics of at least one vector of space-frequency coefficients with similar frequencies, conditioned on previously encoded data. It then estimates an encoding rate based on the conditional statistics and applies a Slepian-Wolf code at that rate. The decoding method includes generating a side-information vector of frequency coefficients based on previously decoded source data, encoder statistics, and previous reconstructions of the source frequency vector, and performing Slepian-Wolf decoding of the source frequency vector based on the generated side information and the Slepian-Wolf code bits.
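The rate-estimation step can be illustrated with a toy model. The sketch below assumes the correlation between one bit-plane of the source coefficients and the decoder's side information behaves like a binary symmetric channel with crossover probability p, in which case the Slepian-Wolf bound gives a minimum rate of H(p) bits per symbol. The function names and the BSC assumption are illustrative, not taken from the disclosure.

```python
import math

# Sketch: estimating a Slepian-Wolf encoding rate from conditional
# statistics, assuming a binary-symmetric correlation channel between
# a source bit-plane and the side information.

def binary_entropy(p):
    """H(p) in bits; 0 at the endpoints."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def estimate_rate(source_bits, side_info_bits):
    """Estimate bits per symbol needed to Slepian-Wolf code
    source_bits when side_info_bits is available at the decoder."""
    mismatches = sum(s != y for s, y in zip(source_bits, side_info_bits))
    p = mismatches / len(source_bits)   # estimated crossover probability
    return binary_entropy(p)            # H(X|Y) for the assumed BSC

src  = [1, 0, 1, 1, 0, 0, 1, 0]
side = [1, 0, 1, 0, 0, 0, 1, 0]         # one bit differs (p = 1/8)
print(round(estimate_rate(src, side), 3))  # -> 0.544
```

The stronger the correlation (smaller p), the lower the estimated rate, which is what lets the encoder stay cheap: it transmits only enough code bits for the decoder to correct the side information's errors.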

The video coding element includes receiving a first reference frame having a first pixel value at a first pixel position, a second reference frame having a second pixel value at a second pixel position, and a third reference frame having a third pixel value at a third pixel position. It determines a first motion vector between the first pixel position and the second pixel position, a second motion vector between the second pixel position and the third pixel position, and a fourth pixel value for a fourth frame based upon a linear or nonlinear combination of the first pixel value, the second pixel value, and the third pixel value. A stationary filtering process determines the estimated pixel values. The parameters of the filter may be predetermined constants.
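The stationary filtering step described above can be sketched as a fixed weighted combination of the three motion-compensated reference pixel values. The weights below are illustrative predetermined constants, not values from the disclosure.

```python
# Sketch of a stationary temporal filter: the fourth frame's pixel
# value is a fixed linear combination of three reference pixel values.

FILTER_WEIGHTS = (0.25, 0.5, 0.25)  # assumed constants; sum to 1

def filter_pixel(v1, v2, v3, weights=FILTER_WEIGHTS):
    """Estimate the fourth pixel value from three reference values."""
    w1, w2, w3 = weights
    return w1 * v1 + w2 * v2 + w3 * v3

print(filter_pixel(100, 120, 140))  # -> 120.0
```

Because the filter is stationary, the same weights apply at every pixel position, so the estimate costs only three multiplies and two adds per pixel.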

This work was done by Ligang Lu, Drake He, Ashish Jagmohan, and Vadim Sheinin of IBM for Stennis Space Center.

Inquiries concerning rights for the commercial use of this invention should be addressed to:

IBM
1101 Kitchawan Road
Yorktown Heights, NY 10598
Telephone No. (914) 945-3114

Refer to SSC-00291/309/310, volume and number of this NASA Tech Briefs issue, and the page number.
