Federated Learning (FL) has garnered significant attention for its potential to protect user privacy while enhancing model training efficiency. FL has therefore been adopted in various domains, from healthcare to industrial engineering, particularly where data cannot be freely exchanged due to sensitive information or privacy regulations. However, recent research has demonstrated that FL protocols can be easily compromised by active reconstruction attacks executed by dishonest servers. In these attacks, the server maliciously modifies the global model parameters so that it can obtain a verbatim copy of users' private data by inverting their gradient updates. Countering this class of attack remains a crucial challenge because of its strong threat model. In this paper, we propose OASIS, a defense mechanism based on image augmentation that effectively counteracts active reconstruction attacks while preserving model performance. We first uncover the core principle of gradient inversion that enables these attacks and theoretically identify the conditions under which a defense remains robust regardless of the attack strategy. We then construct our defense around image augmentation and show that it undermines this attack principle. Comprehensive evaluations demonstrate the efficacy of the defense mechanism, highlighting its feasibility as a practical solution.
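The defense idea summarized above can be sketched minimally: each client applies a fresh random augmentation to every sample before computing its gradient update, so a server that inverts the gradients recovers at most a transformed view of the data rather than the original image. This is an illustrative sketch under assumed names, not the paper's OASIS implementation; `augment`, `local_update`, and the mock `grad_fn` are hypothetical, and images are plain nested lists to keep the example dependency-free.

```python
import random

# Seeded for reproducibility of this illustration only; in practice each
# client would draw fresh randomness per round.
rng = random.Random(0)

def augment(image):
    """Apply a random horizontal flip and a random cyclic row shift.

    `image` is a list of rows (lists of pixel values). Both operations are
    permutations of the pixels, so model-relevant content is preserved while
    the exact spatial layout the attacker would reconstruct is randomized.
    """
    rows = [row[::-1] for row in image] if rng.random() < 0.5 else [row[:] for row in image]
    k = rng.randrange(len(rows))          # cyclic shift amount
    return rows[k:] + rows[:k]

def local_update(images, grad_fn):
    """Client-side step: augment each sample, compute its (mock) per-sample
    gradient, and return the elementwise average of the gradients."""
    grads = [grad_fn(augment(x)) for x in images]
    n = len(grads)
    return [[sum(g[i][j] for g in grads) / n for j in range(len(grads[0][0]))]
            for i in range(len(grads[0]))]
```

Because the augmentation is re-drawn per sample and per round, the mapping from a user's raw image to its submitted gradient is no longer fixed, which is what gradient-inversion attacks implicitly rely on.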