Low-Rank Adaptation (LoRA) has emerged as a prominent method for parameter-efficient fine-tuning of foundation models, re-parameterizing the original weight matrix as the product of two low-rank matrices. Despite its efficiency, LoRA often yields inferior performance compared to full fine-tuning. In this paper, we propose LoRA-Pro to bridge this performance gap. First, we examine the optimization processes of LoRA and full fine-tuning. We reveal that while LoRA employs a low-rank approximation of the weights, it neglects to approximate the optimization dynamics of full fine-tuning. To address this, we introduce a novel concept called the "equivalent gradient": a virtual gradient on the original weight matrix whose induced update is equivalent to the update produced by LoRA's optimization of the low-rank factors. Derived from the gradients of matrices $A$ and $B$, the equivalent gradient quantifies the difference between LoRA and full fine-tuning. To narrow the performance gap, our approach minimizes the discrepancy between the equivalent gradient and the gradient obtained from full fine-tuning at each optimization step. Solving this objective yields optimal closed-form updates for matrices $A$ and $B$. By constraining the optimization process in this way, our method shrinks the performance gap between LoRA and full fine-tuning. Extensive experiments on natural language processing tasks validate the effectiveness of our method.
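The equivalent-gradient idea above can be illustrated with a minimal numpy sketch. This is an assumed notation, not the paper's exact formulation: with $W = W_0 + sBA$, a step on the factors $(A, B)$ moves $W$ by approximately $s(\nabla_B A + B \nabla_A)$, and treating that motion as a virtual gradient on $W$ lets us measure how far LoRA's update is from the full fine-tuning gradient $\nabla_W$. The scaling $s$ and the random shapes below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, s = 8, 6, 2, 1.0

# LoRA factors: W = W0 + s * B @ A, with the standard init B = 0.
B = np.zeros((d_out, r))
A = rng.standard_normal((r, d_in))

# Stand-in for the full fine-tuning gradient dL/dW.
grad_W = rng.standard_normal((d_out, d_in))

# Chain rule gives the factor gradients vanilla LoRA descends on:
grad_B = s * grad_W @ A.T
grad_A = s * B.T @ grad_W

# Equivalent gradient: the virtual gradient on W induced by updating
# A and B with those factor gradients.
g_equiv = s * (grad_B @ A + B @ grad_A)

# The Frobenius gap below is the quantity LoRA-Pro's objective
# minimizes; with B = 0 the equivalent gradient is confined to a
# rank-r subspace, so the gap is generally nonzero.
gap = np.linalg.norm(g_equiv - grad_W)
print(f"rank(g_equiv) <= {r}, gap = {gap:.3f}")
```

Note that with the standard initialization $B = 0$, the equivalent gradient collapses to $s\,\nabla_B A$, which has rank at most $r$; this rank restriction is one concrete way the sketch exhibits the mismatch with the full fine-tuning gradient.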