In the current quantum computing paradigm, significant focus is placed on reducing or mitigating quantum decoherence. When designing new quantum processing units, the general objective is to reduce the amount of noise the qubits are subject to, and in algorithm design, a large effort is underway to develop scalable error-correction and error-mitigation techniques. Yet previous work has indicated that certain classes of quantum algorithms, such as quantum machine learning, may in fact be intrinsically robust to, or even benefit from, the presence of a small amount of noise. Here, we demonstrate that noise levels in quantum hardware can be effectively tuned to enhance the ability of quantum neural networks to generalise from data, acting akin to regularisation in classical neural networks. As an example, we consider two regression tasks in which, by tuning the noise level in the circuit, we demonstrate an improvement in the validation mean-squared-error loss. Moreover, we demonstrate the method's effectiveness by numerically simulating quantum neural network training on a realistic model of a noisy superconducting quantum computer.
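To build intuition for how hardware noise can act like a regulariser, the following minimal sketch (not the paper's actual model or tasks) shows the effect of a depolarizing channel on a one-qubit "quantum neuron". The function names and the single-qubit setup are illustrative assumptions; the key point is that depolarizing noise of strength p shrinks the measured expectation value by a factor (1 - p), damping the model's outputs toward zero much as a penalty term discourages extreme predictions in classical networks.

```python
import numpy as np

def noisy_expectation(theta: float, x: float, p: float) -> float:
    """<Z> after Ry(theta * x)|0>, followed by depolarizing noise of strength p.

    For Ry(a)|0>, the ideal expectation is <Z> = cos(a).  The depolarizing
    channel rho -> (1 - p) * rho + p * I/2 rescales every Pauli expectation
    by (1 - p), so the noisy output is simply the ideal one, shrunk.
    """
    ideal = np.cos(theta * x)   # ideal single-qubit model output in [-1, 1]
    return (1.0 - p) * ideal    # depolarizing noise contracts the output range

# Tuning the noise level p contracts the model's output range,
# a crude analogue of the regularising effect studied in the paper.
for p in (0.0, 0.1, 0.3):
    print(f"p = {p:.1f}  ->  output = {noisy_expectation(1.0, np.pi / 3, p):.3f}")
```

With p = 0 the model is the ideal circuit; increasing p smoothly interpolates toward the constant-zero (maximally mixed) predictor, which is why a small, well-chosen noise level can trade a little bias for reduced overfitting on the validation set.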