Decentralized Federated Learning (DFL), a paradigm for managing big data in a privacy-preserving manner, remains vulnerable to poisoning attacks in which malicious clients tamper with data or models. Current defense methods often assume Independent and Identically Distributed (IID) data, an assumption that rarely holds in real-world applications. In non-IID settings, existing defenses struggle to distinguish compromised models from models trained on heterogeneous data distributions, which reduces their efficacy. In response, this paper proposes a framework that employs the Moving Target Defense (MTD) approach to strengthen the robustness of DFL models. By continuously modifying the attack surface of the DFL system, the framework aims to mitigate poisoning attacks effectively. The proposed MTD framework includes both proactive and reactive modes and relies on a reputation system that combines model-similarity and loss metrics, together with several defensive techniques. Comprehensive experimental evaluations show that the MTD-based mechanism significantly mitigates a range of poisoning attack types across multiple datasets and network topologies.
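The reputation system described in the abstract combines model-similarity and loss metrics to score neighboring clients. The exact formula is not given here, so the following is a minimal sketch under assumed design choices: cosine similarity between flattened parameter vectors as the similarity metric, an exponentially decaying transform of validation loss as the loss metric, and a hypothetical weighting parameter `alpha` to blend the two.

```python
import numpy as np

def reputation_score(local_params, neighbor_params, neighbor_loss,
                     alpha=0.5, loss_scale=1.0):
    """Combine model similarity and loss into one reputation value in [0, 1].

    The blend weight `alpha`, the loss transform, and `loss_scale` are
    illustrative assumptions, not the paper's exact formulation.
    """
    # Cosine similarity between flattened parameter vectors.
    a, b = np.ravel(local_params), np.ravel(neighbor_params)
    cos_sim = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    # Map similarity from [-1, 1] to [0, 1].
    sim_score = (cos_sim + 1.0) / 2.0
    # Lower validation loss -> higher score, squashed into (0, 1].
    loss_score = float(np.exp(-neighbor_loss / loss_scale))
    # Weighted blend of the two metrics.
    return alpha * sim_score + (1.0 - alpha) * loss_score
```

A DFL node could compute such a score for each neighbor's model after every round and down-weight (or exclude) low-reputation neighbors during aggregation, which is one way to make poisoned updates harder to inject under non-IID data.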