We present a unified information-theoretic framework to analyze the generalization performance of differentially private (DP) quantum learning algorithms. By leveraging the connection between privacy and algorithmic stability, we establish that $(\varepsilon, \delta)$-Quantum Differential Privacy (QDP) imposes a strong constraint on the mutual information between the training data and the algorithm's output. We derive a rigorous, mechanism-agnostic upper bound on this mutual information for learning algorithms satisfying a 1-neighbor privacy constraint. Furthermore, we connect this stability guarantee to generalization, proving that the expected generalization error of any $(\varepsilon, \delta)$-QDP learning algorithm is bounded by the square root of the privacy-induced stability term. Finally, we extend our framework to the setting of an untrusted Data Processor, introducing the concept of Information-Theoretic Admissibility (ITA) to characterize the fundamental limits of privacy in scenarios where the learning map itself must remain oblivious to the specific dataset instance.
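The stability-to-generalization step described above can be sketched in the standard mutual-information form (a hedged reconstruction: the symbols $S$ for the training set, $W$ for the algorithm's output, $n$ for the sample size, and the subgaussian parameter $\sigma$ are our notation for illustration, not necessarily the paper's exact statement):

```latex
% Mutual-information generalization bound (Xu–Raginsky style), with the
% privacy-induced stability term plugged in. A sketch of the kind of bound
% the abstract describes, not the paper's precise theorem.
\[
  \bigl|\mathbb{E}\,[\mathrm{gen}(S, W)]\bigr|
  \;\le\;
  \sqrt{\frac{2\sigma^2}{n}\, I(S; W)},
  \qquad
  I(S; W) \;\le\; f(\varepsilon, \delta),
\]
% where f(eps, delta) stands for the mechanism-agnostic upper bound on the
% mutual information that (eps, delta)-QDP is claimed to impose.
```

Combining the two inequalities yields an expected generalization error of order $\sqrt{f(\varepsilon,\delta)}$, matching the abstract's "square root of the privacy-induced stability term."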