Deep gradient inversion attacks pose a serious threat to Federated Learning (FL) by accurately recovering private data from shared gradients. However, state-of-the-art attacks rely on the impractical assumption of access to excessive auxiliary data, which violates the basic data partitioning principle of FL. In this paper, we propose a novel method, Gradient Inversion Attack using Practical Image Prior (GI-PIP), under a revised threat model. GI-PIP exploits anomaly detection models to capture the underlying distribution from less data, whereas GAN-based methods consume significantly more data to synthesize images. The extracted distribution is then leveraged to regulate the attack process as an Anomaly Score loss. Experimental results show that GI-PIP achieves 16.12 dB PSNR recovery using only 3.8\% of the ImageNet data, while GAN-based methods require over 70\%. Moreover, GI-PIP generalizes across data distributions better than GAN-based methods. Our approach significantly reduces the auxiliary-data requirements of gradient inversion attacks, in both quantity and distribution, hence posing a more substantial threat to real-world FL.
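The attack structure described above — matching a dummy input's gradient to the shared gradient while an anomaly score regularizes the search — can be illustrated with a toy numerical sketch. Everything here is an assumption for illustration: the model is a single linear layer with a squared-error task loss, and the "anomaly score" is a crude distance-to-subspace proxy, not the anomaly detection model GI-PIP actually trains.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy setup (all names and sizes are illustrative, not from the paper) ---
d, c = 4, 6                        # input dim, output dim
W = rng.normal(0, 0.5, (c, d))     # shared model: one linear layer
x_true = rng.normal(size=d)        # private client input
y = rng.normal(size=c)             # target for a squared-error task loss

def model_grad(x):
    """Gradient of ||W x - y||^2 w.r.t. W (what the client would share)."""
    r = W @ x - y
    return 2.0 * np.outer(r, x)

g_shared = model_grad(x_true)      # gradient observed by the attacker

# --- Stand-in anomaly score: squared distance to a low-dim subspace ---
# (a hypothetical proxy for an anomaly-detection prior)
B = rng.normal(size=(d, 2))
P = B @ np.linalg.pinv(B)          # orthogonal projection onto span(B)
lam = 0.1                          # weight of the anomaly-score loss

def objective(x):
    """Gradient-matching loss plus anomaly-score regularizer."""
    A = 2.0 * np.outer(W @ x - y, x) - g_shared
    return float(np.sum(A**2) + lam * np.sum(((np.eye(d) - P) @ x)**2))

def objective_grad(x):
    """Analytic gradient of the objective w.r.t. the dummy input x."""
    r = W @ x - y
    A = 2.0 * np.outer(r, x) - g_shared
    g = 4.0 * (W.T @ A @ x) + 4.0 * (A.T @ r)   # gradient-matching term
    g += 2.0 * lam * ((np.eye(d) - P) @ x)      # anomaly-score term
    return g

# --- Attack loop: descend on the regularized matching objective ---
x = rng.normal(size=d) * 0.01      # dummy input, initialized near zero
losses = [objective(x)]
for _ in range(5000):
    x -= 1e-3 * objective_grad(x)
    losses.append(objective(x))
```

The anomaly-score term plays the role the abstract ascribes to the extracted distribution: it steers the dummy input toward plausible inputs, compensating for the limited information in the gradient alone.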