Large Language Models (LLMs) have achieved remarkable success across a wide range of tasks, with fine-tuning playing a pivotal role in adapting them to specific downstream applications. Federated Learning (FL) offers a promising approach to collaborative model adaptation while preserving data privacy, a paradigm we refer to as FedLLM. In this survey, we provide a systematic and thorough review of the integration of LLMs with FL. Specifically, we first trace the historical evolution of both LLMs and FL and summarize relevant prior surveys. We then present an in-depth analysis of the fundamental challenges in deploying FedLLM. Next, we conduct an extensive study of existing parameter-efficient fine-tuning (PEFT) methods and examine their applicability in FL. Furthermore, we introduce a comprehensive evaluation benchmark to rigorously assess FedLLM performance and discuss its diverse real-world applications across multiple domains. Finally, we identify critical open challenges and outline promising research directions to drive future advances in FedLLM. We maintain an active \href{https://github.com/Clin0212/Awesome-Federated-LLM-Learning}{GitHub repository} that tracks cutting-edge advancements in this area. This survey serves as a foundational resource for researchers and practitioners, offering insights into the evolving landscape of federated fine-tuning for LLMs and guiding future innovations in privacy-preserving AI.