Gradient Inversion (GI) attacks are a ubiquitous threat in Federated Learning (FL), as they exploit gradient leakage to reconstruct supposedly private training data. Common defense mechanisms, such as Differential Privacy (DP) or stochastic Privacy Modules (PMs), introduce randomness during gradient computation to prevent such attacks. However, we posit that if an attacker can effectively mimic a client's stochastic gradient computation, the attacker can circumvent the defense and reconstruct clients' private training data. This paper introduces several targeted GI attacks that leverage this principle to bypass common defense mechanisms. As a result, we demonstrate that no individual defense provides sufficient privacy protection. To address this issue, we propose combining multiple defenses. We conduct an extensive ablation study to evaluate the influence of various combinations of defenses on privacy protection and model utility. We observe that only the combination of DP and a stochastic PM was sufficient to decrease the Attack Success Rate (ASR) from 100% to 0%, thus preserving privacy. Moreover, we find that this combination of defenses consistently achieves the best trade-off between privacy and model utility.
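To make the gradient-leakage premise concrete, the following is a minimal toy sketch (not the paper's attack; all names and the single-neuron squared-loss setup are hypothetical) showing why a raw gradient can reveal a client's private input: for a linear model with squared loss, the weight gradient is collinear with the input, so its direction alone reconstructs the input up to a scalar factor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: a single linear neuron p = w.x with
# squared loss L = 0.5 * (p - t)^2 on one private data point.
w = rng.normal(size=5)          # shared model weights (known to attacker)
x_private = rng.normal(size=5)  # client's private input
t_private = 1.0                 # client's private label

# Client-side gradient computation: dL/dw = (w.x - t) * x.
residual = w @ x_private - t_private
grad = residual * x_private  # this is what the client would share

# Attacker side: the leaked gradient is collinear with x_private,
# so its direction already reveals the private input up to scale.
cos = abs(grad @ x_private) / (
    np.linalg.norm(grad) * np.linalg.norm(x_private)
)
print(round(cos, 6))  # → 1.0
```

Stochastic defenses such as DP perturb `grad` before sharing, which is exactly the randomness the attacks described above attempt to mimic and average out.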