With the rapid growth in the scale and complexity of large language models (LLMs), the costs of training and inference have risen substantially. Model compression has emerged as a mainstream solution for reducing memory usage and computational overhead. This paper presents Group Quantization and Sparse Acceleration (\textbf{GQSA}), a novel compression technique tailored to LLMs. Traditional methods typically focus exclusively on either quantization or sparsification, and relying on a single strategy often incurs significant performance loss at high compression rates. In contrast, GQSA integrates quantization and sparsification in a tightly coupled manner, exploiting GPU-friendly structured group sparsity and quantization for efficient acceleration. The proposed method consists of three key steps. First, GQSA applies group-structured pruning that adheres to GPU-friendly sparse-pattern constraints. Second, a two-stage sparsity-aware training process is employed to maximize performance retention after compression. Finally, the framework adopts the Block Sparse Row (BSR) format for practical deployment and efficient execution. Experimental results on the LLaMA model family show that GQSA achieves an excellent balance between inference speed and accuracy. Furthermore, on the latest LLaMA-3 and LLaMA-3.1 models, GQSA significantly outperforms existing LLM compression techniques.