It is well known that artificial neural networks initialized from independent and identically distributed priors converge to Gaussian processes in the limit of a large number of neurons per hidden layer. In this work we prove an analogous result for Quantum Neural Networks (QNNs). Namely, we show that the outputs of certain models based on Haar random unitary or orthogonal deep QNNs converge to Gaussian processes in the limit of large Hilbert space dimension $d$. The derivation of this result is more nuanced than in the classical case due to the role played by the input states, the measurement observable, and the fact that the entries of unitary matrices are not independent. Then, we show that the efficiency of predicting measurements at the output of a QNN using Gaussian process regression depends on the observable's bodyness. Furthermore, our theorems imply that the concentration of measure phenomenon in Haar random QNNs is worse than previously thought, as we prove that expectation values and gradients concentrate as $\mathcal{O}\left(\frac{1}{e^d \sqrt{d}}\right)$. Finally, we discuss how our results improve our understanding of concentration in $t$-designs.
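As a loose numerical illustration of the concentration phenomenon discussed above (a simplified sketch, not the paper's full deep-QNN setting): for a fixed traceless observable $O$ and Haar-random pure states, the standard Haar-integration formula gives $\mathrm{Var}[\langle\psi|O|\psi\rangle] = \mathrm{Tr}(O^2)/(d(d+1))$, so expectation values concentrate around $\mathrm{Tr}(O)/d = 0$ as the Hilbert space dimension $d$ grows. The helper names below (`haar_state`, `expval_variance`) are illustrative, not from the paper.

```python
# Hedged numerical sketch: check that expectation values <psi|O|psi> of a
# fixed traceless observable O concentrate under Haar-random pure states.
import numpy as np

rng = np.random.default_rng(0)

def haar_state(d):
    """Sample a Haar-random pure state: normalize a complex Gaussian vector."""
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

def expval_variance(d, n_samples=2000):
    """Empirical variance of <psi|O|psi> for the traceless O = diag(+1,...,-1)."""
    diag = np.array([1.0] * (d // 2) + [-1.0] * (d // 2))  # Pauli-Z-like, Tr(O)=0, Tr(O^2)=d
    vals = [np.real(np.vdot(psi, diag * psi)) for psi in (haar_state(d) for _ in range(n_samples))]
    return np.var(vals)

for d in (4, 16, 64):
    # Haar theory predicts Var = Tr(O^2)/(d(d+1)) = 1/(d+1) for this O.
    print(f"d={d:3d}  empirical var={expval_variance(d):.4f}  theory={1/(d+1):.4f}")
```

Since $d = 2^n$ for an $n$-qubit system, the $1/(d+1)$ decay already makes expectation values exponentially concentrated in the number of qubits; the theorems in this work sharpen such estimates further for deep Haar-random QNNs.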