Federated Learning is a privacy-preserving, decentralized machine learning paradigm in which multiple clients collaboratively train a model by exchanging gradients with a central server while keeping their private data local. Nevertheless, recent research has shown that this privacy can be compromised: private ground-truth data can be recovered through a gradient inversion technique known as Deep Leakage. Although these attacks are designed with Federated Learning applications in mind, they are generally not evaluated in realistic scenarios. This paper introduces the FEDLAD Framework (Federated Evaluation of Deep Leakage Attacks and Defenses), a comprehensive benchmark for evaluating Deep Leakage attacks and defenses in a realistic federated context. By providing a unified benchmark that encompasses multiple state-of-the-art Deep Leakage techniques and a range of defense strategies, our framework enables these methods to be evaluated and compared across different datasets and training states. This work highlights a crucial trade-off between privacy and model accuracy in Federated Learning, and it aims to deepen the understanding of security challenges in decentralized machine learning systems, stimulate future research, and improve reproducibility in evaluating Deep Leakage attacks and defenses.
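To make the threat concrete, the following is a minimal, illustrative sketch (not the paper's implementation) of the gradient-matching idea behind Deep-Leakage-style attacks: the attacker, who knows the shared model parameters and the reported gradient, optimizes dummy data until its gradient matches the one the client uploaded. The model here is a single linear neuron with squared-error loss, and all names and numbers are hypothetical choices for this toy example.

```python
# Toy gradient inversion: recover a client's private (x, y) from its
# shared gradient on the model y_pred = w.x + b with loss 0.5*(y_pred - y)^2.
# Illustrative sketch only; real attacks target deep networks with many samples.

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

# --- Client side: gradient computed on private data, then shared ---
w = [0.5, -0.3, 0.8, 0.2]        # shared model weights (known to attacker)
b = 0.1                          # shared bias (known to attacker)
x_true = [1.0, -2.0, 0.5, 1.5]   # private input the attacker tries to recover
y_true = 0.9                     # private label

r = dot(w, x_true) + b - y_true  # loss residual
g_w = [r * xi for xi in x_true]  # gradient w.r.t. w (uploaded to the server)
g_b = r                          # gradient w.r.t. b (uploaded to the server)

# --- Attacker side: fit dummy (x_hat, y_hat) so its gradient matches ---
x_hat = [0.0] * len(w)
y_hat = 0.0
lr = 0.02
for _ in range(20000):
    r_hat = dot(w, x_hat) + b - y_hat
    e = [r_hat * xi - gi for xi, gi in zip(x_hat, g_w)]  # weight-grad mismatch
    s = r_hat - g_b                                      # bias-grad mismatch
    ex = dot(e, x_hat)
    # analytic gradients of the matching objective F = ||e||^2 + s^2
    grad_x = [2 * (r_hat * ei + (ex + s) * wi) for ei, wi in zip(e, w)]
    grad_y = -2 * ex - 2 * s
    x_hat = [xi - lr * gx for xi, gx in zip(x_hat, grad_x)]
    y_hat -= lr * grad_y

# After optimization, (x_hat, y_hat) closely approximates (x_true, y_true):
# the gradient alone leaks the private training example.
```

Because this layer has a bias, the gradient-matching objective has a unique minimizer, so plain gradient descent suffices; attacks on deep networks face a much harder, non-convex search, which is precisely where evaluation conditions (model, dataset, training state) matter.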