Fine-tuning plays a crucial role in enabling pre-trained LLMs to evolve from general language comprehension to task-specific expertise. To preserve user data privacy, federated fine-tuning has emerged as the de facto paradigm. However, federated fine-tuning is prohibitively inefficient due to the tension between LLM complexity and the resource constraints of end devices, incurring unaffordable fine-tuning overhead. Existing literature primarily utilizes parameter-efficient fine-tuning (PEFT) techniques to mitigate communication costs, yet computational and memory burdens continue to pose significant challenges for developers. This work proposes DropPEFT, an innovative federated PEFT framework that employs a novel stochastic transformer-layer dropout method, enabling devices to deactivate a considerable fraction of LLM layers during training and thereby eliminating the associated computational load and memory footprint. A key challenge in DropPEFT is the proper configuration of per-layer dropout ratios, as both overhead and training performance are highly sensitive to this setting. To address this challenge, we adaptively assign optimal dropout-ratio configurations to devices through an exploration-exploitation strategy, achieving efficient and effective fine-tuning. Extensive experiments show that DropPEFT achieves a 1.3-6.3× speedup in model convergence and a 40%-67% reduction in memory footprint compared to state-of-the-art methods.
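The core mechanism, stochastically skipping transformer layers during training so that deactivated layers incur no compute and hold no activations, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name and the per-layer probability list `drop_probs` are assumptions for exposition.

```python
import random

def forward_with_layer_dropout(x, layers, drop_probs, training=True):
    """Illustrative stochastic transformer-layer dropout: during training,
    layer i is skipped with probability drop_probs[i], passing the hidden
    state through unchanged (an identity shortcut). A skipped layer costs
    no forward/backward compute and stores no activations."""
    for layer, p in zip(layers, drop_probs):
        if training and random.random() < p:
            continue  # layer deactivated for this training step
        x = layer(x)
    return x
```

At inference time (`training=False`) every layer runs, so the full model capacity is used once fine-tuning is done.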
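The abstract does not specify which exploration-exploitation algorithm assigns dropout-ratio configurations to devices; one standard instance of such a strategy is epsilon-greedy selection over a candidate set, sketched below purely for illustration (all names and the reward bookkeeping are hypothetical, not from the paper).

```python
import random

def choose_config(configs, rewards, counts, epsilon=0.1):
    """Hypothetical epsilon-greedy selection over candidate dropout-ratio
    configurations: with probability epsilon explore a random configuration;
    otherwise exploit the one with the best average observed reward
    (e.g., training progress per unit of on-device cost)."""
    if random.random() < epsilon or not any(counts):
        return random.randrange(len(configs))  # explore
    avg = [r / c if c else float("-inf") for r, c in zip(rewards, counts)]
    return max(range(len(configs)), key=avg.__getitem__)  # exploit
```

After each round, the server would update `rewards` and `counts` for the chosen configuration with the observed training feedback, gradually concentrating on configurations that balance convergence speed against device overhead.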