Large Language Models (LLMs) have revolutionized intelligent services by enabling logical reasoning, tool use, and interaction with external systems as agents. However, the advancement of LLMs is frequently hindered by the scarcity of high-quality data, much of which is inherently sensitive. Federated learning (FL) offers a potential solution by facilitating the collaborative training of distributed LLMs while safeguarding private data. However, FL frameworks impose heavy bandwidth and computational demands and must contend with heterogeneous data distributions. The emerging in-context learning capability of LLMs offers a promising alternative: aggregating natural language rather than bulky model parameters. Yet this approach risks privacy leakage, as it requires collecting and presenting data samples from different clients during aggregation. In this paper, we propose a novel privacy-preserving Federated In-Context LLM Agent Learning (FICAL) algorithm, which, to the best of our knowledge, is the first work to unleash the power of in-context learning for training diverse LLM agents through FL. In our design, knowledge compendiums generated by a novel LLM-enhanced Knowledge Compendiums Generation (KCG) module are transmitted between clients and the server in place of the model parameters exchanged in previous FL methods. In addition, we design a Retrieval-Augmented Generation (RAG) based Tool Learning and Utilizing (TLU) module that uses the aggregated global knowledge compendium as a teacher to instruct LLM agents in tool usage. Extensive experiments show that FICAL achieves performance competitive with other SOTA baselines while reducing communication cost by a factor of $\mathbf{3.33\times10^5}$.