By leveraging the principles of quantum mechanics, quantum machine learning (QML) opens the door to novel approaches in machine learning and offers potential computational speedups. However, machine learning models are well documented to be vulnerable to malicious manipulation, and this susceptibility extends to QML models. This situation necessitates a thorough understanding of QML's resilience to adversarial attacks, particularly in an era of rapidly expanding quantum computing capabilities. To this end, this paper examines model-independent bounds on adversarial performance for QML. To the best of our knowledge, we present the first computation of an approximate lower bound on the adversarial error when evaluating model resilience against sophisticated quantum-based adversarial attacks. Experimental results are compared against the computed bound, demonstrating the potential of QML models to achieve high robustness. In the best case, the experimental error lies only 10% above the estimated bound, offering evidence of the inherent robustness of quantum models. This work not only advances our theoretical understanding of quantum model resilience but also provides a precise reference bound for the future development of robust QML algorithms.