For reliable large-scale quantum computation, quantum error correction (QEC) is essential to protect logical information distributed across multiple physical qubits. Taking advantage of recent advances in deep learning, neural network-based decoders have emerged as a promising approach to improving the reliability of QEC. We propose the qubit-centric transformer (QCT), a novel and universal QEC decoder based on a transformer architecture with a qubit-centric attention mechanism. Our decoder transforms input syndromes from the stabilizer domain into qubit-centric tokens via a specialized embedding strategy. These qubit-centric tokens are processed through attention layers to effectively identify the underlying logical error. Furthermore, we introduce a graph-based masking method that incorporates the topological structure of quantum codes, restricting attention to relevant qubit interactions. Across various code distances for surface codes, QCT achieves state-of-the-art decoding performance, significantly outperforming existing neural decoders and the belief propagation with ordered statistics decoding (BP+OSD) baseline. Notably, QCT achieves a high threshold of 18.1% under depolarizing noise, which closely approaches the theoretical bound of 18.9% and surpasses both the BP+OSD and the minimum-weight perfect matching (MWPM) thresholds. This qubit-centric approach provides a scalable and robust framework for surface code decoding, advancing the path toward fault-tolerant quantum computing.
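The graph-based masking idea described above can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: we assume a binary parity-check matrix `H` (stabilizers × qubits) and allow two qubit-centric tokens to attend to each other only when the corresponding qubits participate in a common stabilizer. The toy matrix below is for illustration only and does not encode an actual surface code.

```python
import numpy as np

# Toy parity-check matrix: rows are stabilizers, columns are qubits.
# (Illustrative only; not a real surface-code check matrix.)
H = np.array([
    [1, 1, 0, 1, 0],   # stabilizer 0 acts on qubits 0, 1, 3
    [0, 1, 1, 0, 1],   # stabilizer 1 acts on qubits 1, 2, 4
    [1, 0, 0, 1, 1],   # stabilizer 2 acts on qubits 0, 3, 4
])

def attention_mask(H: np.ndarray) -> np.ndarray:
    """Boolean (n_qubits x n_qubits) mask: True where attention is allowed.

    Qubits i and j share at least one stabilizer iff (H^T H)[i, j] > 0,
    so the mask follows directly from the code's Tanner-graph structure.
    """
    shared = H.T @ H
    mask = shared > 0
    np.fill_diagonal(mask, True)  # every token may attend to itself
    return mask

mask = attention_mask(H)
```

In a transformer layer, such a mask would typically be applied by setting the attention logits of disallowed pairs to a large negative value before the softmax, so that attention weight flows only along edges of the code's connectivity graph.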