Given sufficient data from multiple edge devices, federated learning (FL) enables training a shared model without transmitting private data to a central server. However, FL is generally vulnerable to Byzantine attacks from compromised edge devices, which can significantly degrade model performance. In this paper, we propose an intuitive plugin that can be integrated into existing FL techniques to achieve Byzantine resilience. The key idea is to generate virtual data samples and evaluate model consistency scores across local updates to effectively filter out compromised edge devices. By applying this scoring mechanism before the aggregation phase, the proposed plugin enables existing FL techniques to become robust against Byzantine attacks while maintaining their original benefits. Numerical results on a medical image classification task validate that plugging the proposed approach into representative FL algorithms effectively achieves Byzantine resilience. Furthermore, the proposed plugin maintains the original convergence properties of the base FL algorithms when no Byzantine attacks are present.
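The filtering step described above can be sketched as follows. This is a minimal illustration, not the paper's actual method: the model is a hypothetical linear predictor, the virtual samples are random vectors, and the consistency score is assumed to be the (negated) distance of each local model's outputs from the coordinate-wise median of all local models' outputs on those samples; the paper's exact scoring rule and aggregation may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(weights, x):
    # Hypothetical linear model: outputs = x @ weights.
    return x @ weights

def consistency_scores(client_weights, virtual_x):
    # Evaluate every local model on the same server-generated virtual
    # samples and score each client by how close its outputs are to the
    # coordinate-wise median of all outputs (an assumed scoring rule).
    outputs = np.stack([predict(w, virtual_x) for w in client_weights])
    median = np.median(outputs, axis=0)
    return np.array([-np.linalg.norm(o - median) for o in outputs])

def robust_aggregate(client_weights, virtual_x, keep_frac=0.6):
    # Filter before aggregation: keep the clients with the highest
    # consistency scores, then average their weights (plain FedAvg
    # over the kept subset).
    scores = consistency_scores(client_weights, virtual_x)
    k = max(1, int(len(client_weights) * keep_frac))
    keep = np.argsort(scores)[-k:]
    return np.mean([client_weights[i] for i in keep], axis=0)

# Toy round: 8 honest clients near the true weights, 2 Byzantine
# clients sending arbitrary updates.
true_w = np.ones(5)
honest = [true_w + 0.01 * rng.normal(size=5) for _ in range(8)]
byzantine = [10.0 * rng.normal(size=5) for _ in range(2)]
virtual_x = rng.normal(size=(16, 5))  # server-generated virtual samples

agg = robust_aggregate(honest + byzantine, virtual_x)
```

Because the honest clients form a majority, the median of the virtual-sample outputs sits near the honest predictions, so the Byzantine updates receive low consistency scores and are excluded before averaging.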