LoRA has become one of the most widely used parameter-efficient fine-tuning methods due to its simplicity and effectiveness. However, numerous studies have shown that LoRA often introduces substantial parameter redundancy, which not only inflates the number of trainable parameters but can also hinder fine-tuning effectiveness. Because redundant parameters in LoRA are inherently difficult to identify, eliminating them efficiently and accurately remains a challenging problem. In this paper, we propose TASO, a redundancy reduction method that leverages importance information from the pretrained model's weights to mitigate LoRA redundancy. Specifically, we estimate parameter importance on downstream tasks and identify task-specific core regions based on the distribution of importance scores. The locations of these core regions are then used to determine the sparse structure of the LoRA modules, enabling redundancy removal before fine-tuning begins. Our approach significantly reduces the number of trainable parameters required for task adaptation while providing a novel task-aligned perspective on LoRA redundancy reduction. Experimental results demonstrate that, with a parameter budget comparable to LoRA with rank $r = 1$, TASO consistently outperforms standard LoRA across multiple tasks, achieving strong fine-tuning performance while effectively eliminating redundant parameters.
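The pipeline the abstract describes (importance estimation, core-region selection, sparse LoRA structure) can be sketched as follows. This is a minimal illustration under our own assumptions, not the paper's implementation: we use a first-order importance proxy $|w \cdot \partial\mathcal{L}/\partial w|$ and a top-$k$ thresholding rule for the core region; the paper's actual importance criterion and region-selection scheme may differ.

```python
import numpy as np

def importance_scores(weight, grad):
    # First-order importance proxy: |w * dL/dw|. This is a common choice
    # in pruning literature; the abstract does not specify TASO's exact
    # criterion, so this is an assumption for illustration.
    return np.abs(weight * grad)

def core_region_mask(scores, budget):
    # Keep the top-`budget` fraction of entries as the task-specific
    # "core region"; everything outside it is treated as redundant.
    k = max(1, int(budget * scores.size))
    thresh = np.partition(scores.ravel(), -k)[-k]
    return scores >= thresh

def sparse_lora_update(A, B, mask):
    # Standard LoRA computes the update dW = B @ A. A TASO-style scheme
    # restricts that update to the core region, fixing the sparse
    # structure *before* fine-tuning rather than pruning afterwards.
    return (B @ A) * mask

# Toy demonstration with random data in place of a pretrained weight
# matrix and a task gradient.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))      # pretrained weight (stand-in)
g = rng.normal(size=(8, 8))      # downstream-task gradient (stand-in)
mask = core_region_mask(importance_scores(W, g), budget=0.1)
A = rng.normal(size=(1, 8))      # rank r = 1 LoRA factors
B = rng.normal(size=(8, 1))
dW = sparse_lora_update(A, B, mask)
print(mask.sum(), np.count_nonzero(dW))
```

Because the mask is computed once from the pretrained weights, the sparse structure adds no per-step selection cost during fine-tuning; only the masked entries of the rank-1 update survive.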