The quest for successful variational quantum machine learning (QML) relies on the design of suitable parametrized quantum circuits (PQCs), as analogues to neural networks in classical machine learning. Successful QML models must fulfill the properties of trainability and non-dequantization, among others. Recent works have highlighted an intricate interplay between the trainability and dequantization of such models, which remains unresolved. In this work we contribute to this debate from the perspective of machine learning, proving a number of results that identify, among other things, when trainability and non-dequantization are not mutually exclusive. We begin by providing a number of new, somewhat broader definitions of the relevant concepts, compared to those found in other literature, which are operationally motivated and consistent with prior art. With these precise definitions given and motivated, we then study the relation between trainability and dequantization of variational QML. Next, we also discuss the degrees of "variationalness" of QML models, where we distinguish between models like the hardware-efficient ansatz and quantum kernel methods. Finally, we introduce recipes for building PQC-based QML models which are both trainable and non-dequantizable, corresponding to different degrees of variationalness. We do not address the practical utility of such models. Our work, however, does point toward a way forward for finding more general constructions, for which finding applications may become feasible.