Pre-trained Language Models (PLMs) have shown excellent performance on various downstream tasks after fine-tuning. Nevertheless, escalating concerns over user privacy pose significant challenges to centralized training that relies on extensive data collection. Federated learning (FL), which trains only on clients and aggregates weights on a server without sharing data, has emerged as a solution. However, the substantial parameter size of PLMs places a heavy burden on the computational resources of client devices and incurs costly communication. Introducing Parameter-Efficient Fine-Tuning (PEFT) into FL can effectively address this problem. However, we observe that non-IID data in federated learning leads to a performance gap between PEFT methods and full-parameter fine-tuning (FT). To overcome this, we propose FeDeRA, an improvement over LoRA in the FL setting. FeDeRA uses the same adapter module as LoRA but initializes it by performing Singular Value Decomposition (SVD) on the pre-trained weight matrix and selecting its principal components. We conducted extensive experiments with RoBERTa and DeBERTaV3 on three tasks and six datasets, comparing FT and three other PEFT methods. FeDeRA outperforms all other PEFT methods and matches or even surpasses FT. We also deployed federated learning on Jetson AGX Orin devices and measured the time each method needs to reach a target accuracy on specific tasks. Compared to FT, FeDeRA reduces training time by 95.9%, 97.9%, and 96.9% on the three tasks with RoBERTa, and by 97.3%, 96.5%, and 96.5% with DeBERTaV3. Overall, the experiments indicate that FeDeRA achieves strong performance while maintaining efficiency.
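The SVD-based initialization described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `federa_init` and the split of the singular values between the two factors are assumptions made here for clarity.

```python
import numpy as np

def federa_init(W, r):
    """Hypothetical sketch of FeDeRA-style adapter initialization:
    take the top-r singular directions of the pre-trained weight W
    and use them to initialize LoRA-style low-rank factors B and A.
    """
    # Thin SVD of the pre-trained weight matrix.
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    # Split the leading singular values evenly between the factors.
    sqrt_S = np.sqrt(S[:r])
    B = U[:, :r] * sqrt_S          # shape (d_out, r)
    A = sqrt_S[:, None] * Vt[:r]   # shape (r, d_in)
    return B, A

# B @ A is the best rank-r approximation of W; during fine-tuning
# only B and A would be updated while W stays frozen.
```

By contrast, standard LoRA initializes one factor with random Gaussian values and the other with zeros, so the adapter starts as a zero update; FeDeRA instead starts the adapter aligned with the principal components of the pre-trained weights.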