Quantum Machine Learning (QML) is becoming increasingly prevalent due to its potential to enhance classical machine learning (ML) tasks such as classification. Although quantum noise is often viewed as a major challenge in quantum computing, it also offers a unique opportunity to enhance privacy. In particular, intrinsic quantum noise provides a natural stochastic resource that, when rigorously analyzed within the differential privacy (DP) framework and composed with classical mechanisms, can satisfy formal $(\varepsilon, \delta)$-DP guarantees. This allows the required classical perturbation to be reduced without exceeding the privacy budget, potentially improving model utility. However, the integration of classical and quantum noise for privacy preservation remains largely unexplored. In this work, we propose HYPER-Q, a hybrid noise-addition mechanism that combines classical and quantum noise to protect the privacy of QML models. We provide a comprehensive analysis of its privacy guarantees and establish theoretical bounds on its utility. Empirically, we demonstrate that HYPER-Q outperforms existing classical noise-based mechanisms in adversarial robustness across multiple real-world datasets.
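To illustrate the core idea of splitting the noise budget between quantum and classical sources, the following is a minimal sketch, not HYPER-Q's actual mechanism. It assumes the intrinsic quantum noise on a measured quantity can be modeled as zero-mean Gaussian with a known standard deviation (`quantum_sigma`, a hypothetical parameter), and uses the standard classical Gaussian-mechanism calibration to top up only the remaining variance needed for a target $(\varepsilon, \delta)$-DP guarantee.

```python
import numpy as np


def gaussian_sigma(epsilon: float, delta: float, sensitivity: float) -> float:
    """Classical Gaussian-mechanism calibration:
    sigma >= sqrt(2 ln(1.25/delta)) * sensitivity / epsilon.
    """
    return np.sqrt(2.0 * np.log(1.25 / delta)) * sensitivity / epsilon


def hybrid_noise(value: float, epsilon: float, delta: float,
                 sensitivity: float, quantum_sigma: float) -> float:
    """Add only the classical Gaussian noise needed on top of the
    intrinsic quantum noise already present in `value`.

    Illustrative assumption: the quantum noise is zero-mean Gaussian
    with std `quantum_sigma`; independent Gaussians add in variance,
    so the classical top-up variance is the difference (clipped at 0).
    """
    target_sigma = gaussian_sigma(epsilon, delta, sensitivity)
    residual_var = max(target_sigma**2 - quantum_sigma**2, 0.0)
    return value + np.random.normal(0.0, np.sqrt(residual_var))
```

When `quantum_sigma` already meets or exceeds the calibrated target, no classical noise is added at all; otherwise only the variance shortfall is injected classically, which is the utility gain the abstract alludes to.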