Federated learning synchronizes models through gradient transmission and aggregation. These gradients, however, pose significant privacy risks, as sensitive training data is embedded within them. Existing gradient inversion attacks suffer severely degraded reconstruction performance when gradients are perturbed by noise, a common defense mechanism. In this paper, we introduce Gradient-Guided Conditional Diffusion Models (GG-CDMs) for reconstructing private images from leaked gradients without prior knowledge of the target data distribution. Our approach leverages the inherent denoising capability of diffusion models to circumvent the partial protection offered by noise perturbation, thereby improving attack performance under such defenses. We further provide a theoretical analysis of the reconstruction error bounds and the convergence properties of the attack loss, characterizing how key factors, such as the noise magnitude and the architecture of the attacked model, affect reconstruction quality. Extensive experiments demonstrate the superior reconstruction performance of our attack on Gaussian-noise-perturbed gradients and confirm our theoretical findings.
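To make the attack setting concrete, the sketch below shows one way gradient-guided conditional diffusion sampling could be realized: at each reverse diffusion step, the sample is nudged by the gradient of a gradient-matching loss between the leaked gradients and the gradients the victim model produces on the current reconstruction, in the style of classifier guidance. This is a minimal illustration under stated assumptions, not the paper's exact algorithm; in particular, `diffusion.denoise`, `diffusion.p_sample`, the image shape, and `guidance_scale` are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def gg_cdm_reconstruct(leaked_grads, model, diffusion, label,
                       steps=1000, guidance_scale=1.0):
    """Sketch of gradient-guided diffusion-based gradient inversion.

    leaked_grads: gradients observed from the victim's update (list of tensors)
    model:        the attacked (victim) network
    diffusion:    a pretrained diffusion model; .denoise(x_t, t) returning a
                  clean-image estimate and .p_sample(x_t, t) performing one
                  reverse step are assumed APIs for illustration
    label:        (possibly inferred) target label, shape (1,)
    """
    params = tuple(model.parameters())
    x_t = torch.randn(1, 3, 32, 32)              # start from pure noise
    for t in reversed(range(steps)):
        x_t = x_t.detach().requires_grad_(True)
        x0_hat = diffusion.denoise(x_t, t)       # current clean estimate

        # Gradient-matching loss: compare the victim model's gradients on
        # the reconstruction against the leaked gradients.
        loss = F.cross_entropy(model(x0_hat), label)
        dummy_grads = torch.autograd.grad(loss, params, create_graph=True)
        match = sum(((g - lg) ** 2).sum()
                    for g, lg in zip(dummy_grads, leaked_grads))

        # Guide the reverse diffusion step toward gradient consistency.
        guide = torch.autograd.grad(match, x_t)[0]
        x_t = diffusion.p_sample(x_t, t) - guidance_scale * guide
    return x_t.detach()
```

Because the guidance signal is injected inside the denoising loop, the diffusion prior keeps the iterate on the natural-image manifold, which is what lets the attack tolerate Gaussian noise added to the leaked gradients.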