In recent years, Parameter-Efficient Fine-Tuning (PEFT) methods such as Low-Rank Adaptation (LoRA) have significantly enhanced the adaptability of large-scale pre-trained models. Weight-Decomposed Low-Rank Adaptation (DoRA) improves upon LoRA by decomposing the weight matrix into magnitude and direction components, leading to superior performance. However, DoRA adjusts magnitudes only column-wise, i.e., along the vertical dimension of the weight matrix, so the horizontal and vertical dimensions are treated asymmetrically. This paper introduces BoRA, an extension of LoRA and DoRA that is symmetric across the horizontal and vertical dimensions: it optimizes the weight matrix by adjusting both column-wise and row-wise magnitudes. Extensive experiments demonstrate that BoRA outperforms state-of-the-art PEFT methods, including LoRA and DoRA, across various benchmarks.
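To make the bi-dimensional magnitude idea concrete, below is a minimal PyTorch sketch of a BoRA-style linear layer. It assumes, as in DoRA, a frozen base weight plus a LoRA update `B @ A`, and adds learnable column-wise (`m_col`) and row-wise (`m_row`) magnitude vectors; the class name, initialization, and the sequential column-then-row normalization are illustrative assumptions, not the paper's definitive formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BoRALinear(nn.Module):
    """Illustrative BoRA-style layer: frozen base weight, LoRA update,
    and learnable magnitudes along both matrix dimensions."""

    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        out_f, in_f = base.weight.shape
        # Frozen pre-trained weight (backbone of the direction component).
        self.weight = nn.Parameter(base.weight.detach().clone(),
                                   requires_grad=False)
        # Standard LoRA factors: delta W = B @ A, with B zero-initialized
        # so the layer starts identical to the base model.
        self.A = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_f, rank))
        # Learnable magnitude vectors for both dimensions (the symmetric
        # part); initialized from the base weight's norms (assumption).
        self.m_col = nn.Parameter(base.weight.norm(dim=0, keepdim=True))  # (1, in_f)
        self.m_row = nn.Parameter(base.weight.norm(dim=1, keepdim=True))  # (out_f, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        v = self.weight + self.B @ self.A                   # merged direction matrix
        v = v / (v.norm(dim=0, keepdim=True) + 1e-8)        # normalize columns
        v = v / (v.norm(dim=1, keepdim=True) + 1e-8)        # then normalize rows
        w = self.m_row * v * self.m_col                     # reapply learned magnitudes
        return F.linear(x, w)
```

In use, one would wrap an existing layer, e.g. `BoRALinear(model.some_linear, rank=8)` (name hypothetical), and train only `A`, `B`, `m_col`, and `m_row`. The sequential column-then-row normalization is one plausible scheme: exact simultaneous unit norms along both dimensions cannot generally be achieved in a single step, so the learned magnitude vectors absorb the residual scaling.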