Large pre-trained models have achieved remarkable success across various domains. The substantial training costs of these models have spurred extensive research on fine-tuning as a way to harness their capabilities for downstream tasks. Yet conventional fine-tuning approaches become infeasible when the model cannot access downstream data due to privacy concerns. Naively integrating fine-tuning with emerging federated learning frameworks incurs substantial communication overhead and places heavy demands on local computing resources, making it impractical for common resource-limited devices. In this paper, we introduce SFPrompt, a privacy-preserving fine-tuning method tailored for the federated setting, where uploading raw data is prohibited and local devices are too resource-constrained to run a complete pre-trained model. In essence, SFPrompt judiciously combines split learning with federated learning to address these challenges. Specifically, the pre-trained model is first partitioned into client and server components, which streamlines the client-side model and substantially alleviates the computational demand on local resources. SFPrompt then introduces soft prompts into the federated model to improve fine-tuning performance. To further reduce communication costs, a novel dataset pruning algorithm and a local-loss update strategy are devised for the fine-tuning process. Extensive experiments demonstrate that SFPrompt achieves performance competitive with federated full fine-tuning while consuming a mere 0.46% of local computing resources and incurring 53% less communication cost.
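To make the client/server split and the soft-prompt component concrete, the following is a minimal sketch, not the authors' implementation: it assumes a Transformer backbone partitioned after a small number of layers, with learnable prompt embeddings prepended on the client side while the backbone itself stays frozen. All layer counts, dimensions, and class names are illustrative assumptions.

```python
# Hypothetical sketch of a split pre-trained model with client-side soft prompts.
import torch
import torch.nn as nn

EMBED_DIM, NUM_HEADS, PROMPT_LEN = 768, 12, 10  # assumed backbone sizes

def make_block():
    return nn.TransformerEncoderLayer(d_model=EMBED_DIM, nhead=NUM_HEADS, batch_first=True)

class ClientModel(nn.Module):
    """Runs on the resource-limited device: soft prompts plus only the first few frozen blocks."""
    def __init__(self, num_client_layers=2):
        super().__init__()
        self.soft_prompt = nn.Parameter(torch.randn(1, PROMPT_LEN, EMBED_DIM) * 0.02)
        self.blocks = nn.ModuleList(make_block() for _ in range(num_client_layers))
        for p in self.blocks.parameters():
            p.requires_grad = False  # backbone frozen; only the prompt is tuned locally

    def forward(self, token_embeddings):  # (batch, seq, EMBED_DIM)
        prompts = self.soft_prompt.expand(token_embeddings.size(0), -1, -1)
        h = torch.cat([prompts, token_embeddings], dim=1)
        for blk in self.blocks:
            h = blk(h)
        return h  # "smashed" activations sent to the server

class ServerModel(nn.Module):
    """Runs on the server: the remaining frozen blocks plus a task head."""
    def __init__(self, num_server_layers=10, num_classes=10):
        super().__init__()
        self.blocks = nn.ModuleList(make_block() for _ in range(num_server_layers))
        for p in self.blocks.parameters():
            p.requires_grad = False
        self.head = nn.Linear(EMBED_DIM, num_classes)

    def forward(self, smashed):
        h = smashed
        for blk in self.blocks:
            h = blk(h)
        return self.head(h.mean(dim=1))  # pooled logits for the downstream task
```

Under this split, only the prompt parameters and the intermediate activations cross the network, which is the source of the computation and communication savings the abstract describes; the dataset pruning and local-loss strategies are omitted here.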