Quantum neural networks (QNNs) require an efficient training algorithm to achieve practical quantum advantages. A promising approach is gradient-based optimization, in which gradients are estimated through quantum measurements. However, it is generally difficult to measure gradients in QNNs efficiently because the quantum state collapses upon measurement. In this work, we prove a general trade-off between gradient measurement efficiency and expressivity in a wide class of deep QNNs, elucidating the theoretical limits and possibilities of efficient gradient estimation. This trade-off implies that a more expressive QNN requires a higher measurement cost for gradient estimation, while gradient measurement efficiency can be increased by reducing the QNN's expressivity to suit a given task. We further propose a general QNN ansatz, the stabilizer-logical product ansatz (SLPA), which can reach the upper limit of the trade-off inequality by exploiting the symmetric structure of the quantum circuit. In learning an unknown symmetric function, the SLPA drastically reduces the quantum resources required for training while maintaining accuracy and trainability, compared with a well-designed symmetric circuit trained via the parameter-shift method. Our results not only provide a theoretical understanding of efficient training in QNNs but also offer a standard, broadly applicable design for efficient QNNs.
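To make the measurement cost concrete, the following is a minimal numpy sketch of the standard parameter-shift rule referenced above, for a single-qubit toy circuit. This illustrates the baseline gradient-estimation method, not the SLPA itself; the circuit, observable, and function names (`rx`, `expectation`, `parameter_shift_grad`) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Pauli matrices: X generates the rotation gate, Z is the measured observable.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def rx(theta):
    """Single-qubit rotation RX(theta) = exp(-i * theta * X / 2)."""
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * X

def expectation(theta, psi0=np.array([1, 0], dtype=complex), obs=Z):
    """f(theta) = <psi0| RX(theta)^dagger . obs . RX(theta) |psi0>.
    On hardware this number would be estimated from repeated measurements."""
    psi = rx(theta) @ psi0
    return np.real(np.conj(psi) @ obs @ psi)

def parameter_shift_grad(theta, shift=np.pi / 2):
    """Parameter-shift rule for a Pauli-generated gate (exact, not finite-difference):
    df/dtheta = [f(theta + s) - f(theta - s)] / (2 sin s)."""
    return (expectation(theta + shift) - expectation(theta - shift)) / (2 * np.sin(shift))

theta = 0.7
print(parameter_shift_grad(theta))  # parameter-shift estimate
print(-np.sin(theta))               # analytic gradient of f(theta) = cos(theta)
```

For gates generated by Pauli operators the rule is exact, but each parameter requires two additional circuit evaluations at theta +/- pi/2, so the total measurement cost grows linearly with the number of parameters. This per-parameter cost is the baseline that the trade-off inequality and the SLPA's symmetry-based construction address.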