Channel decoding is a challenging task in communication channels exhibiting memory effects. In this work, we apply the recently proposed decoding paradigm of guessing random additive noise decoding (GRAND) to channels with memory, focusing on linear Gaussian intersymbol interference (ISI) channels. To describe error patterns (EPs), we introduce the concept of an error burst to account for the memory effect, and define a sequence reliability to characterize the likelihood of an EP. Based on sequence reliability, we obtain the optimal GRAND algorithm as a generalization of soft GRAND (SGRAND) for linear Gaussian ISI channels, termed SGRAND-ISI, which is equivalent to the maximum-likelihood (ML) decoding algorithm. We then develop ordered reliability bits GRAND (ORBGRAND) algorithms based on SGRAND-ISI to facilitate implementation. In numerical experiments, our proposed algorithms achieve improvements of multiple dB over GRAND algorithms that ignore channel memory, and can often attain performance within 0.1--0.2 dB of the ML lower bound. We also compare our proposed algorithms with the recently proposed ORBGRAND-Approximate Independence algorithm for handling channel memory, and observe a performance gain of at least 0.5 dB at a block error rate of $10^{-3}$, while incurring substantially lower computational complexity.
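To make the GRAND paradigm referenced above concrete, the following is a minimal sketch of basic hard-decision GRAND over a toy single-parity-check code. It queries putative error patterns in order of increasing Hamming weight (the most likely first for a memoryless channel) until subtracting the pattern from the received word yields a codeword. This is an illustration of the general paradigm only; the algorithms in the paper instead order patterns by the burst-based sequence reliability defined for ISI channels, which is not shown here.

```python
import itertools
import numpy as np

def grand_decode(y_hard, H, max_queries=10_000):
    """Hard-decision GRAND sketch: test error patterns e in increasing
    Hamming weight until y XOR e satisfies all parity checks (H c = 0 mod 2)."""
    n = len(y_hard)
    queries = 0
    # Enumerate error patterns weight by weight (most likely first on a BSC).
    for w in range(n + 1):
        for support in itertools.combinations(range(n), w):
            e = np.zeros(n, dtype=int)
            e[list(support)] = 1
            c = y_hard ^ e  # putative codeword after removing the guessed noise
            queries += 1
            if not ((H @ c) % 2).any():
                return c, queries  # first hit is the ML codeword under this ordering
            if queries >= max_queries:
                return None, queries  # abandon guessing (GRANDAB-style cutoff)
    return None, queries

# Toy example: [4,3] single-parity-check code, one bit flipped in transmission.
H = np.array([[1, 1, 1, 1]])          # hypothetical parity-check matrix for illustration
y = np.array([1, 1, 1, 0])            # received hard decisions (codeword [0,1,1,0] with bit 0 flipped)
decoded, num_queries = grand_decode(y, H)
```

Here the all-zero pattern fails the parity check, and the second query (flipping bit 0) recovers the even-parity codeword, illustrating why GRAND's complexity is low at high SNR: few queries are needed when noise is light.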