Quantum computers are progressing toward outperforming classical supercomputers, but quantum errors remain their primary obstacle. The key to overcoming errors on near-term devices has emerged through the field of quantum error mitigation, which enables improved accuracy at the cost of additional run time. Here, through experiments on state-of-the-art quantum computers using up to 100 qubits, we demonstrate that machine learning for quantum error mitigation (ML-QEM) drastically reduces the cost of mitigation without sacrificing accuracy. We benchmark ML-QEM using a variety of machine learning models -- linear regression, random forests, multi-layer perceptrons, and graph neural networks -- on diverse classes of quantum circuits, over increasingly complex device-noise profiles, under interpolation and extrapolation, and in both numerics and experiments. These tests employ the popular digital zero-noise extrapolation method as an added reference. Finally, we propose a path toward scalable mitigation by using ML-QEM to mimic traditional mitigation methods with superior runtime efficiency. Our results show that classical machine learning can extend the reach and practicality of quantum error mitigation by reducing its overheads, and they highlight its broader potential for practical quantum computations.
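The core idea behind ML-QEM can be illustrated with a minimal sketch: train a classical regression model to map noisy expectation values (as measured on hardware) to their ideal counterparts, then apply the learned map to new circuits. The sketch below uses entirely synthetic data and the simplest model family mentioned above (linear regression, fit in closed form); the attenuation factor and noise level are illustrative assumptions, not parameters from the experiments.

```python
import random

random.seed(0)

# Illustrative assumption: a depolarizing-like channel attenuates ideal
# expectation values by a fixed factor and adds small readout jitter.
ATTENUATION = 0.6  # hypothetical noise attenuation factor

# Training data: pairs of (noisy, ideal) expectation values in [-1, 1].
ideal = [random.uniform(-1, 1) for _ in range(200)]
noisy = [ATTENUATION * v + random.gauss(0, 0.02) for v in ideal]

# Fit the simplest ML-QEM model: 1-D linear regression mapping
# noisy -> ideal, via closed-form least squares.
n = len(noisy)
mx = sum(noisy) / n
my = sum(ideal) / n
slope = (
    sum((x - mx) * (y - my) for x, y in zip(noisy, ideal))
    / sum((x - mx) ** 2 for x in noisy)
)
intercept = my - slope * mx

def mitigate(x):
    """Apply the learned map to a noisy expectation value."""
    return slope * x + intercept
```

In the full method, the feature vector would also encode circuit structure (which is what motivates graph neural networks for this task), but the train-then-predict workflow is the same; at inference time the model adds essentially no quantum runtime overhead, which is the source of the cost reduction.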