Quantum Error Correction (QEC) decoding faces a fundamental accuracy-efficiency tradeoff. Classical methods such as Minimum Weight Perfect Matching (MWPM) exhibit variable performance across noise models and suffer from polynomial complexity, while tensor network decoders achieve high accuracy at prohibitively high computational cost. Recent neural decoders reduce complexity but lack the accuracy needed to compete with computationally expensive classical methods. We introduce SAQ-Decoder, a unified framework combining transformer-based learning with constraint-aware post-processing that achieves both near-Maximum-Likelihood (ML) accuracy and computational cost that scales linearly with the syndrome size. Our approach pairs a dual-stream transformer architecture, which processes syndrome and logical information with asymmetric attention patterns, with a novel differentiable logical loss that directly optimizes the Logical Error Rate (LER) through smooth approximations over finite fields. SAQ-Decoder achieves near-optimal performance, with error thresholds of 10.99% (independent noise) and 18.6% (depolarizing noise) on toric codes that approach the ML bounds of 11.0% and 18.9%, while outperforming existing neural and classical baselines in accuracy, complexity, and parameter efficiency. Our findings establish that learned decoders can simultaneously achieve competitive decoding accuracy and computational efficiency, addressing key requirements for practical fault-tolerant quantum computing systems.
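To make the "smooth approximations over finite fields" concrete, the sketch below illustrates one standard way such a differentiable logical loss can be built: a soft relaxation of GF(2) parity, where the probability that the XOR of independent bits equals 1 is expressed as a differentiable product. This is a minimal illustration under our own assumptions, not the paper's exact formulation; the names `soft_parity`, `logical_loss`, `logical_support`, and `target_flips` are hypothetical.

```python
# Minimal sketch (assumed, not the authors' implementation) of a differentiable
# logical loss based on a soft XOR over GF(2).
import torch


def soft_parity(p: torch.Tensor, dim: int = -1) -> torch.Tensor:
    """Smooth relaxation of the GF(2) parity of independent Bernoulli bits.

    If bit i is 1 with probability p_i, the probability that the XOR of all
    bits equals 1 is (1 - prod_i (1 - 2 p_i)) / 2, which is differentiable
    in the probabilities p_i.
    """
    return 0.5 * (1.0 - torch.prod(1.0 - 2.0 * p, dim=dim))


def logical_loss(error_probs: torch.Tensor,
                 logical_support: torch.Tensor,
                 target_flips: torch.Tensor) -> torch.Tensor:
    """Binary cross-entropy between the soft logical-flip probability and the
    true logical flip, so gradients directly target the logical error rate.

    error_probs:      (batch, n_qubits) predicted per-qubit error probabilities
    logical_support:  (n_logicals, n_qubits) 0/1 mask of each logical operator
    target_flips:     (batch, n_logicals) ground-truth logical flips in {0, 1}
    """
    # Restrict each parity to the qubits in the logical operator's support;
    # qubits outside the support get probability 0 and leave the product unchanged.
    p = error_probs.unsqueeze(1) * logical_support.unsqueeze(0)  # (B, L, n)
    flip_prob = soft_parity(p, dim=-1)                           # (B, L)
    return torch.nn.functional.binary_cross_entropy(
        flip_prob.clamp(1e-6, 1 - 1e-6), target_flips.float())
```

In this relaxation the loss is exactly the hard logical parity when the predicted probabilities saturate to 0 or 1, while remaining smooth in between, which is what allows logical-error behavior to be optimized directly by gradient descent.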