Quantum neural networks (QNNs) use parameterized quantum circuits with data-dependent inputs and generate outputs through the evaluation of expectation values. Calculating these expectation values requires repeated circuit evaluations, introducing fundamental finite-sampling noise even on error-free quantum computers. We reduce this noise by introducing variance regularization, a technique that reduces the variance of the expectation value during quantum model training. If the QNN is properly constructed, the technique requires no additional circuit evaluations. Our empirical findings demonstrate that the reduced variance speeds up training, lowers the output noise, and decreases the number of necessary evaluations of the gradient circuits. We benchmark this regularization method on the regression of multiple functions and of the potential energy surface of water. In our examples, it lowers the variance by an order of magnitude on average and significantly reduces the noise level of the QNN. Finally, we demonstrate QNN training on a real quantum device and evaluate the impact of error mitigation. There, the optimization is feasible only because of the reduced number of shots needed for the gradient evaluation, a direct result of the reduced variance.
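The two central ideas above, finite-sampling (shot) noise in expectation-value estimates and a variance term added to the training loss, can be illustrated with a minimal single-qubit toy model. This is a sketch under stated assumptions, not the paper's implementation: the RY(θ) circuit, the function names, and the regularization weight `alpha` are all chosen here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def exact_expectation(theta):
    # Toy model: measuring Z on RY(theta)|0> gives <Z> = cos(theta)
    return np.cos(theta)

def sampled_expectation(theta, shots):
    # Each shot returns +1 or -1 with p(+1) = (1 + cos(theta)) / 2;
    # the mean over a finite number of shots carries sampling noise.
    p_plus = (1.0 + np.cos(theta)) / 2.0
    outcomes = rng.choice([1.0, -1.0], size=shots, p=[p_plus, 1.0 - p_plus])
    return outcomes.mean()

def output_variance(theta):
    # Var[Z] = <Z^2> - <Z>^2 = 1 - cos(theta)^2 for a Pauli-Z measurement
    return 1.0 - np.cos(theta) ** 2

def regularized_loss(theta, target, alpha):
    # Illustrative variance-regularized objective: squared error on the
    # exact expectation plus an alpha-weighted variance penalty, so that
    # training is pushed toward low-variance (low shot-noise) outputs.
    return (exact_expectation(theta) - target) ** 2 + alpha * output_variance(theta)

# The standard error of the estimator scales as sqrt(Var / shots),
# so more shots (or a smaller variance) means less output noise.
estimate = sampled_expectation(0.7, shots=100_000)
```

Since the estimator's standard error is `sqrt(Var/shots)`, halving the variance halves the number of shots needed for a given noise level, which is the mechanism behind the reduced gradient-evaluation cost reported above.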