Conventional federated learning primarily aims to secure the privacy of data distributed across multiple edge devices, with the global model dispatched to edge devices for parameter updates during the learning process. However, developing large language models (LLMs) requires substantial data and computational resources, rendering them valuable intellectual property for their developers and owners. To establish a mechanism that protects both data and model privacy in a federated learning context, we introduce a method that needs only to distribute a quantized version of the model's parameters during training. This method enables accurate gradient estimation for parameter updates while preventing clients from obtaining a model whose performance is comparable to that of the centrally hosted one. Moreover, we combine this quantization strategy with LoRA, a popular and parameter-efficient fine-tuning method, to significantly reduce communication costs in federated learning. The proposed framework, named \textsc{FedLPP}, successfully ensures both data and model privacy in the federated learning context. Additionally, the learned central model exhibits good generalization and can be trained in a resource-efficient manner.
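The core mechanism described above (clients receive only a quantized copy of the weights, while trainable low-rank adapters are communicated back) can be illustrated with a minimal sketch. This is not the paper's actual algorithm; the uniform symmetric quantizer, the bit width, and the LoRA rank below are illustrative assumptions.

```python
import numpy as np

def quantize(w, bits=8):
    # Uniform symmetric quantization (an assumed scheme): clients only
    # ever see this degraded version of the server's weights.
    scale = np.max(np.abs(w)) / (2 ** (bits - 1) - 1)
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 16)).astype(np.float32)  # server's full-precision weight

q, scale = quantize(W)
W_client = dequantize(q, scale)  # what a client actually receives

# In the LoRA-style setup, a client trains only low-rank factors A (m x r)
# and B (r x n) on top of the frozen quantized base, so r*(m+n) numbers
# travel back to the server instead of the full m*n matrix.
r = 2
A = rng.standard_normal((16, r)).astype(np.float32) * 0.01
B = np.zeros((r, 16), dtype=np.float32)  # zero-init: initial update is zero
delta = A @ B

# The server merges the returned low-rank update into its private
# full-precision weights; the full-precision model never leaves the server.
W_new = W + delta

err = np.max(np.abs(W - W_client))
print(f"max quantization error: {err:.4f}")
```

The privacy argument in this sketch is structural: a client can estimate gradients against `W_client`, but only the server ever holds `W` and `W_new` at full precision.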