Quantum Machine Learning (QML) combines quantum computing paradigms with machine learning models, offering significant prospects for solving complex problems. However, with the proliferation of third-party vendors in the Noisy Intermediate-Scale Quantum (NISQ) era, the security of QML models is of prime importance, particularly against reverse engineering, which could expose the models' trained parameters and algorithms. We assume the untrusted quantum cloud provider is an adversary with white-box access to the transpiled, user-designed, trained QML model during inference. Reverse engineering (RE) to extract the pre-transpiled QML circuit enables re-transpilation and use of the model on hardware with completely different native gate sets and even different qubit technologies. Such flexibility cannot be obtained from the transpiled circuit, which is tied to a particular hardware platform and qubit technology. Knowledge of the number of parameters and their optimized values could allow further training to alter the QML model, tamper with an embedded watermark, embed the adversary's own watermark, or refine the model for other purposes. In this first effort to investigate RE of QML circuits, we perform RE and compare the training accuracy of original and reverse-engineered Quantum Neural Networks (QNNs) of various sizes. We find that multi-qubit classifiers can be reverse-engineered under specific conditions with a mean error on the order of 1e-2 in a reasonable time. As a defense, we propose adding dummy fixed parametric gates to QML models to increase the RE overhead. For instance, adding 2 dummy qubits and 2 dummy layers to a classifier with 2 qubits and 3 layers increases the RE overhead by ~1.76x, at a performance overhead of less than 9%. We note that RE is a very powerful attack model that warrants further efforts on defenses.
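The dummy-gate defense can be illustrated with a minimal sketch. The code below is not the paper's implementation: it builds a toy layered ansatz of per-qubit RY rotations in plain NumPy (entangling gates omitted for brevity) and contrasts the original 2-qubit, 3-layer classifier with an inflated variant carrying 2 dummy qubits and 2 dummy layers whose angles are fixed rather than trained. The `layered_ansatz` helper and the placement of dummy parameters are illustrative assumptions.

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def layered_ansatz(n_qubits, n_layers, params):
    """Unitary of stacked per-qubit RY rotation layers.

    Illustrative sketch only: a real QNN ansatz would interleave
    entangling gates (e.g. CNOTs) between rotation layers."""
    dim = 2 ** n_qubits
    U = np.eye(dim)
    for layer in np.asarray(params).reshape(n_layers, n_qubits):
        L = np.array([[1.0]])
        for theta in layer:
            L = np.kron(L, ry(theta))  # tensor the per-qubit rotations
        U = L @ U
    return U

rng = np.random.default_rng(0)

# Original classifier: 2 qubits x 3 layers -> 6 trainable angles.
trained = rng.uniform(0, 2 * np.pi, size=2 * 3)
U_orig = layered_ansatz(2, 3, trained)

# Obfuscated variant: 4 qubits x 5 layers -> 20 angles total, of which
# 14 are fixed dummy values an adversary must still reverse-engineer.
# Interleaving of dummy vs. trained angles here is arbitrary.
dummy = rng.uniform(0, 2 * np.pi, size=4 * 5 - trained.size)
U_obf = layered_ansatz(4, 5, np.concatenate([trained, dummy]))

print(U_orig.shape, U_obf.shape)             # (4, 4) (16, 16)
print("trainable angles:", trained.size,
      "angles exposed to RE:", trained.size + dummy.size)
```

The point of the sketch is the asymmetry it makes visible: the defender trains only 6 parameters, while an adversary reverse-engineering the obfuscated circuit faces 20 candidate parametric gates without knowing which are dummies.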