Advancements in Large Language Models (LLMs) have been hindered by their substantial sizes, which necessitate LLM compression methods for practical deployment. Singular Value Decomposition (SVD) offers a promising solution for LLM compression. However, state-of-the-art SVD-based LLM compression methods have two key limitations: truncating smaller singular values may lead to higher compression loss, and the remaining model parameters are not updated after SVD truncation. In this work, we propose SVD-LLM, a new SVD-based LLM compression method that addresses both limitations. SVD-LLM incorporates a truncation-aware data whitening strategy to ensure a direct mapping between singular values and compression loss. Moreover, SVD-LLM adopts a layer-wise closed-form model parameter update strategy to compensate for the accuracy degradation caused by SVD truncation. We evaluate SVD-LLM on a total of 11 datasets and seven models from three different LLM families at four different scales. Our results demonstrate the superiority of SVD-LLM over state-of-the-art methods, especially at high model compression ratios. The source code is available at https://github.com/AIoT-MLSys-Lab/SVD-LLM.
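For context, the sketch below illustrates the vanilla SVD truncation that SVD-LLM builds upon: a weight matrix is factored and only the top-`rank` singular values are kept, so the layer can be stored as two thin matrices. This is a minimal illustration of plain low-rank truncation, not the SVD-LLM method itself (which additionally applies the truncation-aware whitening and closed-form parameter update described above); the tensor shapes and `rank` value are assumptions for demonstration.

```python
import torch

def svd_truncate(W: torch.Tensor, rank: int):
    """Plain SVD truncation of a weight matrix W of shape (m, n).

    Keeping the top-`rank` singular values replaces W with two factors
    A (m x rank) and B (rank x n), storing rank * (m + n) parameters
    instead of m * n, i.e. the layer is compressed when
    rank < m * n / (m + n).
    """
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * S[:rank]   # absorb singular values into the left factor
    B = Vh[:rank, :]
    return A, B

# Usage (hypothetical sizes): approximate y = W @ x with y ≈ A @ (B @ x)
W = torch.randn(4096, 4096)
A, B = svd_truncate(W, rank=512)
rel_err = torch.linalg.norm(W - A @ B) / torch.linalg.norm(W)
```

Because plain truncation minimizes error on the weights rather than on the layer outputs, its loss does not map directly to the singular values being dropped; the whitening and update strategies in SVD-LLM are designed to close exactly this gap.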