Recent research in federated large language models (LLMs) has primarily focused on enabling clients to collaboratively fine-tune their locally deployed homogeneous LLMs or on transferring knowledge from a server-based LLM to small language models (SLMs) at downstream clients. However, a significant gap remains in simultaneously and mutually enhancing both the server's LLM and the clients' SLMs. To bridge this gap, we propose FedMKT, a parameter-efficient federated mutual knowledge transfer framework for large and small language models. The framework adaptively transfers knowledge from the server's LLM to clients' SLMs while concurrently enriching the LLM with the clients' unique domain insights. FedMKT first aligns tokens across the two vocabularies using minimum edit distance (MinED) and then performs selective mutual knowledge transfer between the client-side SLMs and the server-side LLM, collectively enhancing their performance. Through extensive experiments across three distinct scenarios, we evaluate the effectiveness of FedMKT using various publicly available LLMs and SLMs on a range of NLP text-generation tasks. Empirical results demonstrate that FedMKT simultaneously boosts the performance of both the LLM and the SLMs.
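The following is a minimal sketch, assuming a Python setting, of the MinED-style token alignment step mentioned above: each token in an SLM vocabulary is mapped to the LLM token whose string form has the smallest edit distance, so that output distributions over the two vocabularies can later be put into correspondence for knowledge transfer. The helper names (`edit_distance`, `align_tokens`) and the toy vocabularies are illustrative assumptions, not the paper's implementation.

```python
def edit_distance(a: str, b: str) -> int:
    """Standard Levenshtein (minimum edit) distance via dynamic programming."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, start=1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,          # deletion
                        dp[j - 1] + 1,      # insertion
                        prev + (ca != cb))  # substitution or match
            prev = cur
    return dp[-1]


def align_tokens(slm_tokens: list[str], llm_tokens: list[str]) -> dict[str, str]:
    """Map each SLM token to its MinED-closest LLM token (illustrative sketch)."""
    mapping = {}
    for s_tok in slm_tokens:
        mapping[s_tok] = min(llm_tokens, key=lambda l_tok: edit_distance(s_tok, l_tok))
    return mapping


if __name__ == "__main__":
    # Toy surface forms standing in for the SLM and LLM tokenizer vocabularies.
    slm_vocab = ["▁feder", "ated", "▁know", "ledge"]
    llm_vocab = ["▁federated", "▁knowledge", "ledge", "ated"]
    print(align_tokens(slm_vocab, llm_vocab))
```

In practice such a mapping would be computed once over the two tokenizers' vocabularies (or the tokens appearing in a batch), after which logits produced by one model can be gathered onto the other model's token indices for the mutual distillation step.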