Federated Learning (FL) has recently been applied to the parameter-efficient fine-tuning of Large Language Models (LLMs). While promising, it raises significant challenges due to the heterogeneous resources and data distributions of clients. This study introduces FlexLoRA, a simple yet effective aggregation scheme for LLM fine-tuning, which mitigates the ``bucket effect'' in traditional FL that restricts the potential of clients with ample resources by tying them to the capabilities of the least-resourced participants. FlexLoRA allows dynamic adjustment of local LoRA ranks, fostering the development of a global model imbued with broader, less task-specific knowledge. By synthesizing a full-size LoRA weight from individual client contributions and employing Singular Value Decomposition (SVD) for weight redistribution, FlexLoRA fully leverages heterogeneous client resources. Our experiments, involving thousands of clients with heterogeneous NLP tasks and resources, validate the efficacy of FlexLoRA: the federated global model consistently outperforms SOTA FL methods in downstream NLP task performance across various heterogeneous distributions. FlexLoRA's practicality is further underscored by our theoretical analysis and its seamless integration with existing LoRA-based FL methods, offering a path toward cross-device, privacy-preserving federated tuning for LLMs.
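The aggregation-and-redistribution step described above can be sketched as follows. This is a minimal numpy illustration, not the paper's implementation: the function name `flexlora_aggregate` and the uniform weighted-average aggregation rule are assumptions for illustration. Each client contributes LoRA factors of its own rank; the server synthesizes full-size updates, averages them, and truncates the SVD of the result back to each client's local rank.

```python
import numpy as np

def flexlora_aggregate(client_updates, client_weights, local_ranks):
    """Hypothetical sketch of FlexLoRA-style aggregation.

    client_updates: list of (B, A) LoRA factor pairs; B has shape (d, r_i),
                    A has shape (r_i, k), where r_i may differ per client.
    client_weights: per-client aggregation weights (assumed here; e.g.
                    proportional to local data size).
    local_ranks:    target LoRA rank for each client on redistribution.
    Returns a list of (B_r, A_r) factor pairs, one per entry in local_ranks.
    """
    # 1. Synthesize each client's full-size LoRA weight and average them,
    #    so clients of different ranks can be aggregated in one space.
    d = client_updates[0][0].shape[0]
    k = client_updates[0][1].shape[1]
    global_dw = np.zeros((d, k))
    for (B, A), w in zip(client_updates, client_weights):
        global_dw += w * (B @ A)

    # 2. Redistribute via truncated SVD at each client's local rank.
    U, S, Vt = np.linalg.svd(global_dw, full_matrices=False)
    redistributed = []
    for r in local_ranks:
        B_r = U[:, :r] * S[:r]   # absorb singular values into B, shape (d, r)
        A_r = Vt[:r, :]          # shape (r, k)
        redistributed.append((B_r, A_r))
    return redistributed
```

When a client's rank covers the full spectrum of the averaged update, the truncated factors reconstruct it exactly; lower-rank clients receive the best rank-r approximation in the Frobenius-norm sense, which is how heterogeneous resources are accommodated.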