Large language models (LLMs) have not yet effectively leveraged the vast amounts of edge-device data, and federated learning (FL) offers a promising paradigm to collaboratively fine-tune LLMs without transferring private edge data to the cloud. To operate within the computation and communication constraints of edge devices, recent literature on federated fine-tuning of LLMs proposes the use of low-rank adaptation (LoRA) and similar parameter-efficient methods. However, LoRA-based methods suffer from accuracy degradation in FL settings, primarily because of data and computational heterogeneity across clients. We propose Ravan, an adaptive multi-head LoRA method that balances parameter efficiency and model expressivity by reparameterizing the weight updates as a sum of LoRA heads $s_i\textbf{B}_i\textbf{H}_i\textbf{A}_i$, in which only the core matrices $\textbf{H}_i$ and their lightweight scaling factors $s_i$ are trained. These trainable scaling factors let the optimization focus on the most useful heads, recovering a higher-rank approximation of the full update without increasing the number of communicated parameters, since clients upload $s_i\textbf{H}_i$ directly. Experiments on vision and language benchmarks show that Ravan improves test accuracy by $2-8\%$ over prior parameter-efficient baselines, making it a robust and scalable solution for federated fine-tuning of LLMs.
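The reparameterization described above can be illustrated with a minimal NumPy sketch. All dimensions, initializations, and variable names below are illustrative assumptions, not the paper's actual implementation: the frozen factors $\textbf{A}_i$ and $\textbf{B}_i$ are fixed at initialization, while only the small core matrices $\textbf{H}_i$ and scalars $s_i$ would be trained and communicated.

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in = 64, 64   # layer dimensions (illustrative)
num_heads, r = 4, 4    # number of LoRA heads and per-head rank (assumed)

# Frozen projection factors: A_i is (r x d_in), B_i is (d_out x r).
# These stay fixed, so clients never need to upload them.
A = [rng.standard_normal((r, d_in)) / np.sqrt(d_in) for _ in range(num_heads)]
B = [rng.standard_normal((d_out, r)) / np.sqrt(r) for _ in range(num_heads)]

# Trainable core matrices H_i (r x r) and per-head scaling factors s_i.
# Zero-initializing H_i makes the initial update zero, so fine-tuning
# starts exactly from the pretrained weights.
H = [np.zeros((r, r)) for _ in range(num_heads)]
s = np.ones(num_heads)

def delta_W(s, H):
    """Weight update as the sum of heads: sum_i s_i * B_i @ H_i @ A_i."""
    return sum(s[i] * B[i] @ H[i] @ A[i] for i in range(num_heads))

dW = delta_W(s, H)          # full-rank-capped update of shape (d_out, d_in)

# Communication payload: only the products s_i * H_i, i.e.
# num_heads * r * r values instead of the full d_out * d_in update.
payload = [s[i] * H[i] for i in range(num_heads)]
n_sent = sum(p.size for p in payload)
```

With these (assumed) sizes, a client uploads $4 \times 4 \times 4 = 64$ values per layer rather than the $64 \times 64 = 4096$ entries of the dense update, while the sum over heads can reach rank up to $\sum_i r = 16$ rather than the rank $r = 4$ of a single LoRA pair.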