Backdoor attacks pose a significant threat to the robustness of Federated Learning (FL) due to their stealth and effectiveness. They preserve the FL system's main task while simultaneously executing the backdoor task, making malicious models appear statistically similar to benign ones and thus able to evade detection by existing defenses. We find that the malicious parameters in backdoored models are inactive on the main task, which leads to a significantly larger empirical loss during machine unlearning on clean inputs. Inspired by this, we propose MASA, a method that applies individual unlearning to local models to identify malicious models in FL. To improve MASA's performance in challenging non-independent and identically distributed (non-IID) settings, we design a pre-unlearning model fusion mechanism that integrates each local model with knowledge learned from other datasets, mitigating the divergence in unlearning behaviors caused by clients' non-IID data distributions. Additionally, we propose a new anomaly detection metric with minimal hyperparameters to filter out malicious models efficiently. Extensive experiments on IID and non-IID datasets across six different attacks validate the effectiveness of MASA. To the best of our knowledge, this is the first work to leverage machine unlearning to identify malicious models in FL. Code is available at \url{https://github.com/JiiahaoXU/MASA}.
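To make the detection idea concrete, the following is a minimal illustrative sketch of the filtering step only: given each client's empirical loss measured after a few unlearning (gradient-ascent) steps on clean data, flag models whose loss is an outlier on the high side. The function name and the median-absolute-deviation (MAD) rule are our own assumptions for illustration; they are not MASA's actual anomaly detection metric.

```python
import statistics

def filter_by_unlearning_loss(losses, z_thresh=3.0):
    """Flag clients whose post-unlearning loss is abnormally large.

    `losses[i]` is the empirical loss of client i's local model after a
    few unlearning steps on clean inputs.  Backdoored models are expected
    to exhibit a significantly larger loss, so we score each value by its
    robust (MAD-based) deviation above the median and flag high outliers.
    This is a generic stand-in for the paper's metric, with a single
    hyperparameter `z_thresh`.
    """
    med = statistics.median(losses)
    mad = statistics.median(abs(x - med) for x in losses) or 1e-12
    # 1.4826 rescales the MAD to a standard-deviation estimate
    # under an assumed normal distribution of benign losses.
    flagged = [i for i, x in enumerate(losses)
               if (x - med) / (1.4826 * mad) > z_thresh]
    benign = [i for i in range(len(losses)) if i not in flagged]
    return benign, flagged
```

For example, with five benign models whose unlearning losses cluster near 1.0 and one backdoored model at 8.0, only the latter is flagged; the surviving (benign) models would then be aggregated as usual.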