Protecting data privacy in deep learning (DL) is of crucial importance. Several celebrated privacy notions have been established and used for privacy-preserving DL. However, many existing mechanisms achieve privacy at the cost of significant utility degradation and computational overhead. In this paper, we propose a stochastic differential equation-based residual perturbation for privacy-preserving DL, which injects Gaussian noise into each residual mapping of ResNets. Theoretically, we prove that residual perturbation guarantees differential privacy (DP) and reduces the generalization gap of DL. Empirically, we show that residual perturbation is computationally efficient and outperforms the state-of-the-art differentially private stochastic gradient descent (DPSGD) in utility maintenance without sacrificing membership privacy.
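The core mechanism described above — injecting Gaussian noise into each residual mapping — can be read as an Euler–Maruyama discretization of an SDE of the form dx = f(x) dt + σ dW. The following is a minimal numpy sketch of one such perturbed residual step; the function names, the toy residual mapping `f`, and all parameter values are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def perturbed_residual_step(x, f, sigma, dt, rng):
    """One Euler-Maruyama step of dx = f(x) dt + sigma dW:
    a residual mapping x + f(x) with injected Gaussian noise."""
    noise = rng.normal(size=x.shape)
    return x + f(x) * dt + sigma * np.sqrt(dt) * noise

# Usage with a toy residual mapping (a tanh layer with fixed weights).
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4)) * 0.1          # hypothetical layer weights
f = lambda x: np.tanh(x @ W)               # hypothetical residual mapping

x = rng.normal(size=(2, 4))                # batch of 2 feature vectors
y = perturbed_residual_step(x, f, sigma=0.5, dt=1.0, rng=rng)

# With sigma = 0 the step reduces to a plain ResNet residual mapping,
# x + f(x), recovering the unperturbed network.
y_clean = perturbed_residual_step(x, f, sigma=0.0, dt=1.0, rng=rng)
```

Stacking such steps across residual blocks yields the perturbed forward pass; the noise scale σ is what trades off utility against the privacy guarantee.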