We present a unified information-theoretic framework elucidating the interplay between stability, privacy, and the generalization performance of quantum learning algorithms. We establish a bound on the expected generalization error in terms of quantum mutual information and derive a probabilistic upper bound that generalizes the classical result by Esposito et al. (2021). Complementing these findings, we provide a lower bound on the expected true loss relative to the expected empirical loss. Additionally, we demonstrate that $(\varepsilon, \delta)$-quantum differentially private learning algorithms are stable, thereby ensuring strong generalization guarantees. Finally, we extend our analysis to dishonest learning algorithms, introducing Information-Theoretic Admissibility (ITA) to characterize the fundamental limits of privacy when the learning algorithm is oblivious to specific dataset instances.