Post-training compression of large language models (LLMs) largely relies on low-rank weight approximation, which represents each column of a weight matrix in a shared low-dimensional subspace. While this strategy is computationally efficient, the imposed structural constraint is rigid and can lead to a noticeable drop in model accuracy. In this work, we propose CoSpaDi (Compression via Sparse Dictionary Learning), a novel training-free compression framework that replaces low-rank decomposition with a more flexible structured sparse factorization, in which each weight matrix is represented as the product of a dense dictionary and a column-sparse coefficient matrix. This formulation enables a union-of-subspaces representation: different columns of the original weight matrix are approximated in distinct subspaces spanned by adaptively selected dictionary atoms, offering greater expressiveness than a single invariant basis. Crucially, CoSpaDi leverages a small calibration dataset to optimize the factorization so that the output activations of the compressed projection layers closely match those of the original ones, thereby minimizing functional reconstruction error rather than mere weight approximation error. This data-aware strategy better preserves model fidelity without any fine-tuning at moderate compression ratios. Moreover, the resulting structured sparsity enables efficient sparse-dense matrix multiplication and is compatible with post-training quantization for further memory and latency gains. We evaluate CoSpaDi across multiple Llama and Qwen models under per-layer and per-group settings at 20-50\% compression ratios, demonstrating consistent superiority over state-of-the-art data-aware low-rank methods in both accuracy and perplexity. Our results establish structured sparse dictionary learning as a powerful alternative to conventional low-rank approaches for efficient LLM deployment.
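To make the factorization and objective described above concrete, the following is a minimal sketch of the formulation the abstract implies; the symbols $\mathbf{W}$, $\mathbf{D}$, $\mathbf{C}$, $\mathbf{X}$, and the per-column sparsity budget $s$ are notation introduced here for illustration rather than taken from the paper. For a projection layer with weight $\mathbf{W} \in \mathbb{R}^{m \times n}$ and calibration activations $\mathbf{X} \in \mathbb{R}^{n \times T}$, one natural way to write the data-aware criterion is
\[
\min_{\mathbf{D} \in \mathbb{R}^{m \times k},\; \mathbf{C} \in \mathbb{R}^{k \times n}}
\left\| \mathbf{W}\mathbf{X} - \mathbf{D}\mathbf{C}\mathbf{X} \right\|_F^2
\quad \text{s.t.} \quad \left\| \mathbf{C}_{:,j} \right\|_0 \le s \;\; \text{for all } j,
\]
so that each column $\mathbf{W}_{:,j} \approx \mathbf{D}\mathbf{C}_{:,j}$ lies in the subspace spanned by the at most $s$ dictionary atoms selected for that column (the union-of-subspaces property), while the objective matches the layer's output activations on calibration data rather than the weights themselves.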