We enhance coarsely quantized LDPC decoding by reusing check node messages computed in previous iterations. Typically, variable and check nodes generate new messages in every iteration and overwrite the old ones. We show that, under coarse quantization, discarding the old messages causes a significant loss of mutual information. This loss can be avoided with additional memory, improving performance by up to 0.36 dB. We propose a modified information bottleneck algorithm that designs node operations taking messages from the previous iteration(s) into account as side information. Finally, we present a 2-bit row-layered decoder that operates within 0.25 dB of 32-bit belief propagation.
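To give a flavor of the quantizer-design step, the following is a minimal sketch (not the paper's modified information bottleneck algorithm) of a greedy, agglomerative information-bottleneck quantizer. It assumes a given joint distribution p(x, y) between a binary code bit x and a discrete composite observation y, e.g., the pair (current check node message, message retained from the previous iteration), and merges observation values into 2-bit labels so that the mutual information I(X; Z) is approximately maximized. All function names are illustrative.

```python
import numpy as np

def mutual_information(p_xz):
    """I(X; Z) in bits for a joint distribution p_xz of shape (|X|, |Z|)."""
    p_x = p_xz.sum(axis=1, keepdims=True)
    p_z = p_xz.sum(axis=0, keepdims=True)
    mask = p_xz > 0
    return float((p_xz[mask] * np.log2(p_xz[mask] / (p_x @ p_z)[mask])).sum())

def greedy_ib_quantizer(p_xy, num_labels):
    """Greedily merge the columns of p_xy (one column per observation y)
    into num_labels clusters, at each step picking the pairwise merge that
    retains the most mutual information I(X; Z)."""
    clusters = [p_xy[:, [j]] for j in range(p_xy.shape[1])]  # joint prob. per cluster
    members = [[j] for j in range(p_xy.shape[1])]            # y-values per cluster
    while len(clusters) > num_labels:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # Candidate clustering after merging clusters a and b.
                merged = clusters[:a] + clusters[a+1:b] + clusters[b+1:]
                merged.append(clusters[a] + clusters[b])
                mi = mutual_information(np.hstack(merged))
                if best is None or mi > best[0]:
                    best = (mi, a, b)
        _, a, b = best
        clusters[a] = clusters[a] + clusters[b]  # sum the joint probabilities
        members[a] += members.pop(b)
        clusters.pop(b)
    return members  # members[k] lists the y-values mapped to label k

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    p_xy = rng.random((2, 16))   # toy joint over a bit and 16 composite observations
    p_xy /= p_xy.sum()           # normalize to a valid joint distribution
    print(greedy_ib_quantizer(p_xy, num_labels=4))  # 2-bit label alphabet
```

Greedy pairwise merging is a common heuristic for designing deterministic information bottleneck mappings; the paper's modified algorithm, which additionally conditions on side information from earlier iterations, may differ substantially.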