Channel decoding is a challenging task in communication channels that exhibit memory effects. In this work, we apply the recently proposed decoding paradigm of guessing random additive noise decoding (GRAND) to channels with memory, focusing on linear Gaussian intersymbol interference (ISI) channels. To describe error patterns (EPs), we introduce the concept of an error burst to account for the memory effect, and define sequence reliability to characterize the likelihood of an EP. Based on sequence reliability, we obtain the optimal GRAND algorithm as a generalization of soft GRAND (SGRAND) to linear Gaussian ISI channels, termed SGRAND-ISI, which is equivalent to maximum-likelihood (ML) decoding. We then develop ordered reliability bits (ORB) GRAND algorithms based on SGRAND-ISI to facilitate implementation. In numerical experiments, our proposed algorithms achieve multi-dB gains over GRAND algorithms that ignore channel memory, and often attain performance within 0.1--0.2 dB of the ML lower bound. We also compare our algorithms with the recently proposed ORBGRAND-Approximate Independence algorithm for handling channel memory, and observe a performance gain of at least 0.5 dB at a block error rate of $10^{-3}$, while incurring substantially lower computational complexity.
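As background on the GRAND paradigm the abstract builds on, the sketch below shows the core guessing loop in its simplest hard-decision form: candidate noise patterns are tested in decreasing order of likelihood (for a memoryless binary symmetric channel, increasing Hamming weight) until subtracting one from the received word yields a codeword. This is a minimal illustration only; the function name and the toy codebook are hypothetical, and the paper's SGRAND-ISI instead orders patterns by the sequence reliability defined for linear Gaussian ISI channels.

```python
from itertools import combinations

def grand_decode(received, codebook, max_weight=None):
    """Hard-decision GRAND sketch: try noise patterns in increasing
    Hamming weight (most-likely-first on a memoryless BSC) and return
    the first candidate that lands in the codebook, or None."""
    n = len(received)
    if max_weight is None:
        max_weight = n
    for w in range(max_weight + 1):
        # All ways to flip exactly w of the n bit positions.
        for flips in combinations(range(n), w):
            candidate = list(received)
            for i in flips:
                candidate[i] ^= 1
            candidate = tuple(candidate)
            if candidate in codebook:
                return candidate
    return None

# Toy (4,2) linear code used purely for illustration.
codebook = {(0, 0, 0, 0), (1, 0, 1, 1), (0, 1, 1, 0), (1, 1, 0, 1)}
print(grand_decode((1, 0, 1, 0), codebook))  # recovers (1, 0, 1, 1)
```

Here the received word (1, 0, 1, 0) differs from the codeword (1, 0, 1, 1) in one position, so the weight-1 pass finds it; memory-aware variants change only the order in which patterns are queried, not this overall structure.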