Low-rank adaptation (LoRA) approximates the update of a pretrained weight matrix using the product of two low-rank matrices. However, standard LoRA follows an explicit-rank paradigm, where increasing model capacity requires adding more rows or columns (i.e., basis vectors) to the low-rank matrices, leading to substantial parameter growth. In this paper, we find that these basis vectors exhibit significant parameter redundancy and can be compactly represented by lightweight nonlinear functions. Therefore, we propose Generative Low-Rank Adapter (GenLoRA), which replaces explicit basis vector storage with nonlinear basis vector generation. Specifically, GenLoRA maintains a latent vector for each low-rank matrix and employs a set of lightweight radial basis functions (RBFs) to synthesize the basis vectors. Each RBF requires far fewer parameters than an explicit basis vector, enabling higher parameter efficiency in GenLoRA. Extensive experiments across multiple datasets and architectures show that GenLoRA attains higher effective LoRA ranks under smaller parameter budgets, resulting in superior fine-tuning performance. The code is available at https://anonymous.4open.science/r/GenLoRA-1519.
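The paper does not spell out the exact RBF parameterisation, so the following is a minimal illustrative sketch under one plausible reading: each synthesized basis vector is a weighted sum of a few Gaussian RBFs evaluated on a 1-D coordinate grid over the vector's entries, so each RBF contributes only three scalars (weight, center, width) instead of a full d-dimensional explicit vector. The grid input, the per-RBF triple, and the counts `d`, `r`, `k` are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def rbf_basis(d, rbfs):
    """Synthesize one d-dimensional basis vector from a few RBFs.

    rbfs: list of (weight, center, width) triples -- 3 parameters each,
    far fewer than the d parameters of an explicitly stored vector.
    (Hypothetical parameterisation; the paper's latent-vector input
    is replaced here by a fixed coordinate grid for simplicity.)
    """
    t = np.linspace(0.0, 1.0, d)          # coordinate grid over entries
    v = np.zeros(d)
    for w, c, s in rbfs:
        v += w * np.exp(-((t - c) ** 2) / (2.0 * s ** 2))
    return v

rng = np.random.default_rng(0)
d, r, k = 768, 8, 4                       # hidden dim, LoRA rank, RBFs per vector
params = [[(rng.normal(), rng.uniform(), 0.1 + 0.4 * rng.uniform())
           for _ in range(k)] for _ in range(r)]

# r x d low-rank factor built generatively rather than stored explicitly
A = np.stack([rbf_basis(d, p) for p in params])

explicit_params = r * d                   # 6144: explicit basis storage
generative_params = r * k * 3             # 96: RBF triples only
print(A.shape, explicit_params, generative_params)
```

Under this sketch the generative parameter count grows with the number of RBFs `k` rather than with the hidden dimension `d`, which is the source of the parameter savings the abstract describes.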