Previous research has shown that the principal singular vectors of a pre-trained model's weight matrices capture critical knowledge, whereas those associated with small singular values may carry noise or less reliable information. Consequently, the LoRA-based parameter-efficient fine-tuning (PEFT) approach, which places no constraint on which part of the spectral space is used, may be ineffective for tasks that demand high representation capacity. In this study, we enhance existing PEFT techniques by incorporating the spectral information of the pre-trained weight matrices into the fine-tuning process. We investigate spectral adaptation strategies, with a particular focus on additive adjustment of the top singular vectors. This is accomplished by applying singular value decomposition (SVD) to the pre-trained weight matrices and restricting fine-tuning to the top spectral subspace. Extensive speaker verification experiments on VoxCeleb1 and CN-Celeb1 demonstrate improved fine-tuning performance with the proposed approach. Code is released at https://github.com/lizhepolyu/SpectralFT.