Federated Learning (FL) enables collaborative model training across multiple devices while preserving data privacy. However, it remains susceptible to backdoor attacks, in which malicious participants compromise the global model. Existing defence methods rely on strict assumptions about data heterogeneity (non-independent and identically distributed, or Non-IID, data) and the proportion of malicious clients, which limits their practicality and effectiveness. To overcome these limitations, we propose Robust Knowledge Distillation (RKD), a novel defence mechanism that enhances model integrity without such restrictive assumptions. RKD combines clustering and model-selection techniques to identify and filter out malicious updates, forming a reliable ensemble of models. It then applies knowledge distillation to transfer the ensemble's collective knowledge to the global model. Extensive evaluations show that RKD effectively mitigates backdoor threats while maintaining high model performance, outperforming current state-of-the-art defence methods across a range of scenarios.
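The two-stage pipeline the abstract describes (filter out suspicious client updates, then distill the surviving ensemble into the global model) might be sketched as follows. This is an illustrative toy, not the paper's actual algorithm: the cosine-similarity-to-median filter, the threshold, and all function names are assumptions introduced here for clarity.

```python
import numpy as np

def cosine(a, b):
    # cosine similarity between two flattened update vectors
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def filter_updates(updates, threshold=0.5):
    """Keep updates whose cosine similarity to the coordinate-wise median
    update exceeds `threshold` (a hypothetical filtering criterion)."""
    median = np.median(updates, axis=0)
    return [u for u in updates if cosine(u, median) > threshold]

def distill_targets(ensemble_logits, temperature=2.0):
    """Average the ensemble members' temperature-softened class probabilities;
    such soft targets would supervise the global (student) model."""
    scaled = np.asarray(ensemble_logits) / temperature
    probs = np.exp(scaled - scaled.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)
    return probs.mean(axis=0)

# Toy demo: 8 benign updates clustered in one direction, 2 poisoned outliers.
rng = np.random.default_rng(0)
benign = rng.normal(1.0, 0.1, size=(8, 16))
poison = rng.normal(-1.0, 0.1, size=(2, 16))
kept = filter_updates(np.vstack([benign, poison]))
print(len(kept))  # the 2 poisoned updates are rejected

# Two ensemble members with opposite logits yield symmetric soft targets.
print(distill_targets([[2.0, 0.0], [0.0, 2.0]]))
```

In this sketch the filter rejects updates pointing away from the majority direction, and `distill_targets` produces the averaged soft labels that a distillation loss (e.g. KL divergence against the student's softened outputs) would use.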