The success of current Large Language Models (LLMs) hinges on extensive training data that is collected and stored centrally, a paradigm known as Centralized Learning (CL). However, this manner of collection poses a privacy threat, and one potential solution is Federated Learning (FL), which transmits gradients, rather than raw data, among clients. Unlike traditional networks, FL for LLMs incurs substantial communication costs due to their enormous parameter counts. This study introduces an innovative approach to compressing gradients to improve communication efficiency during FL for LLMs, formulating a new FL pipeline named CG-FedLLM. The approach integrates an encoder on the client side to extract compressed gradient features and a decoder on the server side to reconstruct the gradients. We also developed a novel training strategy comprising Temporal-ensemble Gradient-Aware Pre-training (TGAP), which identifies characteristic gradients of the target model, and Federated AutoEncoder-Involved Fine-tuning (FAF), which compresses gradients adaptively. Extensive experiments confirm that our approach reduces communication costs while improving performance (e.g., an average gain of 3 points over traditional CL- and FL-based fine-tuning with LLaMA on the well-recognized benchmark C-Eval). This improvement arises because our encoder-decoder, trained via TGAP and FAF, filters gradients while selectively preserving critical features. Furthermore, we present a series of experimental analyses of the signal-to-noise ratio, compression rate, and robustness within this privacy-centric framework, providing insight into the development of more efficient and secure LLMs.
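The client-side encode / server-side decode protocol described above can be sketched as follows. This is a minimal illustration, not the paper's architecture: the trained autoencoder produced by TGAP and FAF is replaced by a hypothetical fixed linear encoder with a pseudo-inverse decoder, and the dimensions `d` and `k` are illustrative choices; only the communication pattern (transmitting a `k`-dimensional feature instead of a `d`-dimensional gradient) is what the sketch demonstrates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a flattened gradient of size d is compressed
# into a k-dimensional feature before transmission.
d, k = 1024, 64

# Stand-in for the trained autoencoder: a random linear encoder on the
# client and its pseudo-inverse as the decoder on the server. (The paper
# learns these via TGAP pre-training and FAF fine-tuning, not shown here.)
W_enc = rng.standard_normal((k, d)) / np.sqrt(d)  # client-side encoder
W_dec = np.linalg.pinv(W_enc)                     # server-side decoder

def client_compress(grad: np.ndarray) -> np.ndarray:
    """Client sends only the k-dimensional feature, never the raw gradient."""
    return W_enc @ grad

def server_reconstruct(feature: np.ndarray) -> np.ndarray:
    """Server reconstructs an approximate gradient from the received feature."""
    return W_dec @ feature

grad = rng.standard_normal(d)          # a client's local gradient
feature = client_compress(grad)        # only k floats cross the network
recon = server_reconstruct(feature)    # approximate gradient at the server

# Communication saving relative to sending the raw gradient.
compression_rate = grad.nbytes / feature.nbytes
print(f"compression rate: {compression_rate:.0f}x")
```

With these illustrative sizes the upload shrinks by a factor of d/k = 16; in CG-FedLLM the learned encoder-decoder is what allows such compression while preserving the gradient features that matter for fine-tuning.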