Digital data is subject to errors when stored or transmitted because of noise on the medium or communication channel. The standard solution is to add redundancy to the data at storage or transmission time via an error-correcting code so that, when the data is retrieved, decoding can undo the effect of the noise and correctly recover the original data.
Traditional decoding methods require highly specialized, restricted codebooks and corresponding code-specific decoders. A new method called GRAND (guessing random additive noise decoding), developed by Muriel Médard of the Research Laboratory of Electronics at MIT and Ken Duffy of the Hamilton Institute at Maynooth University, Ireland, takes a different approach to decoding that improves on existing methods.
GRAND decodes by identifying the noise effect, rather than attempting to extract codewords directly from the noise-corrupted encoded data as existing methods do.
The approach consists of three main steps that are repeated until a decoding is obtained. First, noise-effect sequences are generated sequentially in decreasing order of likelihood, either a priori from a statistical channel model or using soft information, such as can be obtained when receiving communication signals.
GRAND then takes each noise-effect sequence in turn, inverts its effect on the noise-corrupted signal, and queries whether the resulting data is a valid codeword, using the code merely as a validator or hash.
The first codeword identified in this way is the decoding. By mirroring the noisy channel, GRAND provides optimally accurate maximum-likelihood decoding of arbitrary codes with moderate redundancy, which can be chosen to match system characteristics precisely.
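The loop described above can be sketched for a hard-decision binary symmetric channel, where "decreasing order of likelihood" simply means increasing Hamming weight of the guessed noise pattern. The (7,4) Hamming parity-check matrix below is an illustrative choice, not something specified in the article; it merely plays the role of the code-as-validator.

```python
# Minimal sketch of GRAND over a binary symmetric channel (flip
# probability < 1/2), using a (7,4) Hamming code as an example.
from itertools import combinations

# Parity-check matrix H: a word c is a codeword iff H @ c == 0 (mod 2).
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def is_codeword(word):
    """Use the code purely as a validator: check H @ word == 0 mod 2."""
    return all(sum(h * w for h, w in zip(row, word)) % 2 == 0 for row in H)

def grand_decode(received, max_weight=3):
    """Guess noise patterns in decreasing likelihood (i.e. increasing
    Hamming weight), invert each from the received word, and return
    the first candidate that the code validates."""
    n = len(received)
    for weight in range(max_weight + 1):       # weight 0 = "no noise"
        for flips in combinations(range(n), weight):
            candidate = list(received)
            for i in flips:                    # invert the guessed noise
                candidate[i] ^= 1
            if is_codeword(candidate):
                return candidate, flips
    return None, None                          # abandon: too noisy

# Example: the all-zero codeword arrives with one bit flipped.
received = [0, 0, 0, 0, 1, 0, 0]
decoded, noise = grand_decode(received)
print(decoded, noise)   # recovers the all-zero codeword, noise at position 4
```

Note the decoder never consults the code's structure beyond membership checking, which is why the same loop works for any moderate-redundancy code once `is_codeword` is swapped out.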
GRAND can efficiently and precisely decode any moderate- to low-redundancy code.
The technology obviates the need for multiple code-specific decoders and is applicable across industries, including AR/VR, low-latency gaming, IoT, 5G networks, optical networks, and many others.