Supervised fine-tuning is the most common method for adapting large language models (LLMs) to downstream tasks, but fully fine-tuning LLMs requires massive computational resources. Recently, parameter-efficient fine-tuning (PEFT) methods have been widely studied due to their cost-effectiveness. LoRA is one of the most widely used methods; it assumes that the optimization process is essentially low-dimensional. Although LoRA fine-tuning is effective, a performance gap to full fine-tuning remains, since its weight updates are restricted to low-rank matrices. To break this low-rank bottleneck in LoRA optimization, we propose PeriodicLoRA (PLoRA), which accumulates low-rank update matrices multiple times to achieve a higher update rank. PLoRA has multiple training stages. During each stage, we still update only the LoRA weights; however, at the end of each stage, we unload the LoRA weights into the backbone parameters and then reinitialize the LoRA states. Experimental results show that PLoRA has stronger learning ability, at most approximately 1.8 times that of LoRA, without increasing memory usage. Furthermore, we introduce a momentum-based unloading strategy for PLoRA to mitigate training instability.
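The staged accumulation described above can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the authors' implementation): a single linear weight W is updated through several stages, each stage trains a fresh rank-r LoRA pair (A drawn from a small Gaussian, B initialized to zero, as in standard LoRA), and at the end of the stage the product BA is merged ("unloaded") into W. The gradient steps here are random placeholders standing in for real loss gradients; the point is only that the accumulated update can exceed rank r.

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 8, 2                        # hidden size, LoRA rank
W = rng.standard_normal((d, d))    # backbone weight (frozen within a stage)

def init_lora():
    # Standard LoRA init: A is a small Gaussian, B is zero,
    # so the update B @ A starts at exactly zero.
    A = rng.standard_normal((r, d)) * 0.01
    B = np.zeros((d, r))
    return A, B

num_stages, steps_per_stage, lr = 3, 10, 0.1
total_delta = np.zeros_like(W)     # tracks the accumulated backbone update

for stage in range(num_stages):
    A, B = init_lora()             # reinitialize LoRA states each stage
    for step in range(steps_per_stage):
        # Placeholder gradients: in real training these come from the loss.
        A -= lr * rng.standard_normal(A.shape) * 0.01
        B -= lr * rng.standard_normal(B.shape) * 0.01
    # End of stage: unload the low-rank update into the backbone.
    delta = B @ A                  # rank at most r
    W += delta
    total_delta += delta

# Each stage contributes a rank-<=r update, so the accumulated update
# total_delta can reach a rank of up to num_stages * r.
```

Because each stage's BA term has rank at most r but the stages are trained from different initializations, the sum of the unloaded updates is generally of higher rank than any single LoRA update, which is the sense in which PLoRA breaks the low-rank bottleneck without ever holding more than one rank-r adapter in memory.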