Parameter-efficient training based on low-rank optimization has become a highly successful tool for fine-tuning large deep learning models. However, these methods often fail for low-rank pre-training, where simultaneously maintaining low-rank weight structure and optimizing the task objective remains challenging. We propose the $\textit{Quadratic Reweighted Rank Regularizer}$ ($\texttt{Q3R}$), which leads to a novel low-rank-inducing training strategy inspired by the Iteratively Reweighted Least Squares (IRLS) framework. $\texttt{Q3R}$ is based on a quadratic regularizer term that majorizes a smoothed log-determinant rank surrogate. Unlike other low-rank training techniques, $\texttt{Q3R}$ can train weight matrices to prescribed low target ranks while achieving predictive performance comparable to dense models, with small computational overhead and full compatibility with existing architectures. For example, we demonstrate a $\texttt{Q3R}$-regularized ViT-Tiny experiment where truncating the model to $60\%$ and $80\%$ of its parameters results in only minor absolute accuracy drops of $1.3\%$ and $4\%$, respectively, on CIFAR-10. We confirm the efficacy of $\texttt{Q3R}$ on Transformers across both vision and language tasks, including low-rank fine-tuning.
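To make the regularizer concrete: in the classical IRLS construction that the abstract invokes, the smoothed log-determinant surrogate $\log\det(WW^\top + \epsilon I)$ is concave in the PSD matrix $WW^\top$, so its first-order expansion at the previous iterate $W_k$ yields a quadratic-in-$W$ upper bound $\mathrm{tr}\!\left((W_k W_k^\top + \epsilon I)^{-1} W W^\top\right) + \text{const}$. The sketch below illustrates this majorization numerically; it is the standard IRLS bound, not necessarily the paper's exact $\texttt{Q3R}$ formulation, and all function names are illustrative.

```python
import numpy as np

def logdet_surrogate(W, eps=1e-3):
    # Smoothed log-determinant rank surrogate: log det(W W^T + eps I).
    m = W.shape[0]
    return np.linalg.slogdet(W @ W.T + eps * np.eye(m))[1]

def reweighting_matrix(W_k, eps=1e-3):
    # IRLS weight P_k = (W_k W_k^T + eps I)^{-1}, frozen at the previous iterate.
    m = W_k.shape[0]
    return np.linalg.inv(W_k @ W_k.T + eps * np.eye(m))

def quadratic_penalty(W, P):
    # Quadratic reweighted term tr(P W W^T); with P fixed, this is a simple
    # weighted Frobenius-type penalty that is cheap to add to a training loss.
    return np.trace(P @ W @ W.T)

# Majorization check: by concavity of log det over PSD matrices,
# logdet(W W^T + eps I) <= logdet(W_k W_k^T + eps I)
#                          + tr(P_k (W W^T - W_k W_k^T))   for all W.
rng = np.random.default_rng(0)
W_k = rng.standard_normal((8, 16))
P_k = reweighting_matrix(W_k)
for _ in range(100):
    W = W_k + 0.5 * rng.standard_normal((8, 16))
    lhs = logdet_surrogate(W)
    rhs = logdet_surrogate(W_k) + np.trace(P_k @ (W @ W.T - W_k @ W_k.T))
    assert lhs <= rhs + 1e-9
```

Because the penalty is quadratic in $W$ once $P_k$ is frozen, it can be minimized with ordinary gradient-based optimizers alongside the task loss, which is what makes the IRLS-style strategy compatible with standard training loops.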