Perturbation-based mechanisms, such as differential privacy, mitigate gradient leakage attacks by injecting noise into gradients, preventing attackers from reconstructing clients' private data from leaked gradients. However, can gradient perturbation truly defend against all gradient leakage attacks? In this paper, we present the first attempt to break the shield of gradient perturbation protection in Federated Learning and extract private information. We focus on common noise distributions, specifically Gaussian and Laplace, and apply our approach to DNN and CNN models. We introduce Mjolnir, a perturbation-resilient gradient leakage attack that removes perturbations from gradients without requiring additional access to the original model structure or external data. Specifically, we exploit the inherent diffusion properties of gradient perturbation protection to build a novel diffusion-based gradient denoising model for Mjolnir. By constructing a surrogate client model that captures the structure of perturbed gradients, we obtain the gradient data needed to train the diffusion model. We further exploit the insight that monitoring the disturbance level during the reverse diffusion process improves gradient denoising, allowing Mjolnir to generate gradients that closely approximate the original, unperturbed versions through adaptive sampling steps. Extensive experiments demonstrate that Mjolnir effectively recovers protected gradients and exposes the Federated Learning process to the threat of gradient leakage, achieving superior performance in both gradient denoising and private data recovery.
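To make the threat model concrete, the following is a minimal sketch of the kind of gradient perturbation protection the abstract refers to: differential-privacy-style defenses typically clip each client gradient to a norm bound and then add Gaussian or Laplace noise before the gradient is shared. The function name and the clip/scale parameters below are illustrative assumptions, not the paper's actual protocol.

```python
import numpy as np

def perturb_gradient(grad, clip_norm=1.0, noise_scale=0.1,
                     dist="gaussian", rng=None):
    """Clip a gradient to `clip_norm`, then add calibrated noise.

    A simplified stand-in for DP-style gradient perturbation; real
    deployments calibrate `noise_scale` to a privacy budget (epsilon).
    """
    rng = rng or np.random.default_rng(0)
    # Scale the gradient down so its L2 norm is at most clip_norm.
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    # Add noise drawn from one of the two distributions the paper targets.
    if dist == "gaussian":
        noise = rng.normal(0.0, noise_scale, size=grad.shape)
    elif dist == "laplace":
        noise = rng.laplace(0.0, noise_scale, size=grad.shape)
    else:
        raise ValueError(f"unsupported distribution: {dist}")
    return clipped + noise
```

Mjolnir's setting is the inverse problem: given only the output of a mechanism like this, recover a gradient close to `clipped` (and from it, the private training data), which is what the diffusion-based denoiser is trained to do.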