Low-rank decomposition, particularly Singular Value Decomposition (SVD), is a pivotal technique for mitigating the storage and computational demands of Large Language Models (LLMs). However, prevalent SVD-based approaches overlook a critical phenomenon: decomposition errors vary substantially across different components of the parameter matrix, which often leads to suboptimal approximation. Furthermore, existing methods lack a direct metric for evaluating the importance of individual weight matrices. To address these limitations, we propose Duo-SVD (Dual-level Optimization SVD), a novel training-free framework that synergizes optimization at the column and module levels. First, Duo-SVD incorporates a Column-Preserving Strategy that explicitly retains columns with high decomposition errors and applies low-rank approximation only to those with lower errors. Second, at the module level, we employ a Module-Adaptive Allocation Strategy that formulates compression-ratio allocation as a global constrained optimization problem based on perturbation-induced model deviation. Extensive experiments demonstrate that Duo-SVD consistently outperforms state-of-the-art SVD-based baselines and structured pruning methods, establishing it as a superior paradigm for efficient LLM compression.
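The Column-Preserving Strategy can be illustrated with a minimal sketch: compute a rank-k SVD approximation of a weight matrix, measure the per-column reconstruction error, and keep the highest-error columns exactly while approximating the rest. The function name, the `keep_frac` hyperparameter, and the choice of Euclidean column norm are illustrative assumptions, not details from the paper.

```python
import numpy as np

def column_preserving_svd(W, rank, keep_frac=0.1):
    """Sketch of a column-preserving low-rank approximation.

    Columns whose rank-`rank` SVD reconstruction error is largest are
    retained exactly; all other columns are replaced by their low-rank
    approximation. `keep_frac` (fraction of columns preserved) is an
    illustrative hyperparameter, not taken from the paper.
    """
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    W_lr = (U[:, :rank] * S[:rank]) @ Vt[:rank]   # rank-k SVD approximation
    col_err = np.linalg.norm(W - W_lr, axis=0)    # per-column reconstruction error
    n_keep = max(1, int(keep_frac * W.shape[1]))
    keep = np.argsort(col_err)[-n_keep:]          # indices of highest-error columns
    W_hat = W_lr.copy()
    W_hat[:, keep] = W[:, keep]                   # preserve those columns exactly
    return W_hat, keep
```

Because the preserved columns are stored verbatim, the overall approximation error can never exceed that of the plain rank-k truncation; in an actual compression pipeline the preserved columns and the low-rank factors of the remaining columns would be stored separately.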