Quantum Machine Learning (QML) promises significant computational advantages, but preserving training data privacy remains challenging. Classical approaches like differentially private stochastic gradient descent (DP-SGD) add noise to gradients but fail to exploit the unique properties of quantum gradient estimation. In this work, we introduce the Differentially Private Parameter-Shift Rule (Q-ShiftDP), the first privacy mechanism tailored to QML. By leveraging the inherent boundedness and stochasticity of quantum gradients computed via the parameter-shift rule, Q-ShiftDP enables tighter sensitivity analysis and reduces noise requirements. We combine carefully calibrated Gaussian noise with intrinsic quantum noise to provide formal privacy and utility guarantees, and show that harnessing quantum noise further improves the privacy-utility trade-off. Experiments on benchmark datasets demonstrate that Q-ShiftDP consistently outperforms classical DP methods in QML.
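The abstract's core idea can be illustrated with a minimal sketch: compute a gradient via the parameter-shift rule, exploit the fact that expectation values (and hence shift-rule gradients) are bounded, then add Gaussian noise calibrated by the standard Gaussian mechanism. This is not the paper's Q-ShiftDP algorithm; the function names, the single-parameter toy circuit (`RY(θ)|0⟩` measured in `Z`, whose expectation is `cos θ`), and the noise calibration are illustrative assumptions.

```python
import numpy as np

def expval(theta):
    # <Z> of RY(theta)|0> = cos(theta); a stand-in for a quantum circuit evaluation.
    return np.cos(theta)

def parameter_shift_grad(f, theta, shift=np.pi / 2):
    # Parameter-shift rule: the exact gradient of a Pauli-rotation circuit
    # from two shifted circuit evaluations.
    return 0.5 * (f(theta + shift) - f(theta - shift))

def dp_parameter_shift_grad(f, theta, eps=1.0, delta=1e-5, clip=1.0, rng=None):
    # Hypothetical DP wrapper (not the paper's Q-ShiftDP): expectation values
    # lie in [-1, 1], so the shift-rule gradient is bounded by 1. Clip to that
    # bound, then add Gaussian noise calibrated by the classical Gaussian
    # mechanism, sigma = clip * sqrt(2 ln(1.25/delta)) / eps.
    rng = np.random.default_rng() if rng is None else rng
    g = np.clip(parameter_shift_grad(f, theta), -clip, clip)
    sigma = clip * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return g + rng.normal(0.0, sigma)

theta = 0.3
g = parameter_shift_grad(expval, theta)   # exactly -sin(0.3) up to float error
g_dp = dp_parameter_shift_grad(expval, theta, rng=np.random.default_rng(0))
```

The paper's claimed tighter sensitivity analysis would replace the generic `clip=1.0` bound with one derived from the structure of shift-rule gradients, reducing `sigma` and improving utility at the same privacy budget.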