When applying quantum computing to machine learning tasks, one of the first considerations is the design of the quantum machine learning model itself. Conventionally, the design of quantum machine learning algorithms relies on the ``quantisation'' of classical learning algorithms, for example using quantum linear algebra to implement important subroutines of classical algorithms, if not the entire algorithm, seeking to achieve quantum advantage through the run-time accelerations that quantum computing may offer. However, recent research has begun to question whether quantum advantage via speedup is the right goal for quantum machine learning [1]. Research has also been undertaken to exploit properties unique to quantum systems, such as quantum contextuality, to better design quantum machine learning models [2]. In this paper, we take an alternative approach, incorporating heuristics and empirical evidence from the design of classical deep learning algorithms into the design of quantum neural networks. We first construct a model based on the data re-uploading circuit [3] with the quantum Hamiltonian data embedding unitary [4]. Through numerical experiments on image datasets, including the well-known MNIST and FashionMNIST datasets, we demonstrate that our model outperforms the quantum convolutional neural network (QCNN) [5] by a large margin (up to over 40% on the MNIST test set). Based on the model design process and numerical results, we then lay out six principles for designing quantum machine learning models, especially quantum neural networks.
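To make the data re-uploading idea concrete, the following is a minimal single-qubit sketch of the scheme of [3], simulated classically with NumPy: the data rotation is interleaved with trainable rotations in every layer, rather than encoded only once at the start. This is an illustration of the general technique only, not the model proposed here; in particular, the Hamiltonian data embedding of [4] is not shown, and the gate choices (RY for data, RZ–RY for the trainable block) are illustrative assumptions.

```python
import numpy as np

def ry(t):
    """Single-qubit rotation about the Y axis by angle t."""
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]], dtype=complex)

def rz(t):
    """Single-qubit rotation about the Z axis by angle t."""
    return np.array([[np.exp(-1j * t / 2), 0],
                     [0, np.exp(1j * t / 2)]], dtype=complex)

def reupload_circuit(x, thetas):
    """Data re-uploading model on one qubit.

    Each layer re-encodes the scalar input x (here via RY(x)) and then
    applies a trainable block RZ(a) RY(b); the model output is the
    expectation value of Pauli-Z on the final state.
    """
    state = np.array([1.0 + 0j, 0.0])       # start in |0>
    for a, b in thetas:
        state = ry(x) @ state               # re-upload the data
        state = rz(a) @ ry(b) @ state       # trainable rotations
    z = np.diag([1.0, -1.0])                # Pauli-Z observable
    return float(np.real(state.conj() @ z @ state))
```

With all trainable angles at zero, the circuit reduces to repeated data rotations, so `reupload_circuit(0.0, [(0.0, 0.0)])` returns 1.0 (the state stays in |0>), while the output always lies in [-1, 1] as an expectation of Pauli-Z.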