Federated learning (FL) addresses data privacy and data-silo issues in large language models (LLMs). Most prior work focuses on improving the training efficiency of federated LLMs; however, security in open environments, particularly defense against malicious clients, is often overlooked. To investigate the safety of LLMs during FL, we conduct preliminary experiments that analyze potential attack surfaces and defensible characteristics from the perspective of Low-Rank Adaptation (LoRA) weights. We identify two key properties of FL: 1) LLMs are vulnerable to attacks from malicious clients in FL, and 2) LoRA weights exhibit distinct behavioral patterns that simple classifiers can filter. Based on these properties, we propose Safe-FedLLM, a probe-based defense framework for federated LLMs that constructs defenses across three dimensions: Step-Level, Client-Level, and Shadow-Level. The core idea of Safe-FedLLM is to perform probe-based discrimination on the LoRA weights each client trains locally during FL, treating them as high-dimensional behavioral features and using lightweight classification models to determine whether they carry malicious attributes. Extensive experiments demonstrate that Safe-FedLLM effectively enhances the defense capability of federated LLMs without compromising performance on benign data. Notably, our method suppresses the influence of malicious data without significantly slowing training, and it remains effective even when a large fraction of clients are malicious. Our code is available at: https://github.com/dmqx/Safe-FedLLM.
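To make the core idea concrete, the following is a minimal illustrative sketch, not the paper's implementation: each client's LoRA factor matrices are flattened into one behavioral feature vector, and a lightweight logistic-regression probe (trained here from scratch with NumPy) separates benign from malicious updates. All names, dimensions, and the synthetic data are assumptions for illustration only.

```python
# Hypothetical sketch of probe-based discrimination on LoRA weights.
# The real Safe-FedLLM pipeline, its probe architecture, and its data
# are described in the paper/repo; everything below is synthetic.
import numpy as np

rng = np.random.default_rng(0)

def flatten_lora(lora_A, lora_B):
    """Concatenate a client's LoRA factor matrices into one feature vector."""
    return np.concatenate([lora_A.ravel(), lora_B.ravel()])

# Synthetic stand-ins for client LoRA factors (rank r=4, hidden dim d=16).
# Malicious updates are given a shifted mean purely to make the toy
# example separable.
benign = [flatten_lora(0.01 * rng.standard_normal((4, 16)),
                       0.01 * rng.standard_normal((16, 4)))
          for _ in range(20)]
malicious = [flatten_lora(0.5 + 0.01 * rng.standard_normal((4, 16)),
                          0.5 + 0.01 * rng.standard_normal((16, 4)))
             for _ in range(20)]

X = np.stack(benign + malicious)          # (40, 128) feature matrix
y = np.array([0] * 20 + [1] * 20)         # 0 = benign, 1 = malicious

# Minimal logistic-regression probe trained by gradient descent.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(malicious)
    w -= 0.5 * (X.T @ (p - y)) / len(y)      # gradient step on weights
    b -= 0.5 * (p - y).mean()                # gradient step on bias

preds = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
accuracy = (preds == y).mean()
print(accuracy)
```

In practice the probe would be trained on weight updates collected from trusted and simulated-attack clients, and updates flagged as malicious would be excluded from server-side aggregation; this sketch only shows why flattened LoRA weights can serve as classifiable behavioral features.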