Large language models (LLMs) have recently demonstrated remarkable performance across diverse language tasks. However, their deployment is often constrained by their substantial computational and storage requirements. Quantization has emerged as a key technique for addressing this challenge, enabling the compression of large models with minimal impact on performance. The recent GPTQ algorithm, a post-training quantization (PTQ) method, has proven highly effective for compressing LLMs, sparking a wave of research that leverages GPTQ as a core component. Recognizing the pivotal role of GPTQ in the PTQ landscape, we introduce CDQuant, a simple and scalable alternative to GPTQ with improved performance. CDQuant uses coordinate descent to minimize the layer-wise reconstruction loss, yielding high-quality quantized weights. Our algorithm is easy to implement and scales efficiently to models with hundreds of billions of parameters. Through extensive evaluation on the PaLM2 model family, we demonstrate that CDQuant consistently outperforms GPTQ across diverse model sizes and quantization levels. In particular, for INT2 quantization of PaLM2-Otter, CDQuant achieves a 10% reduction in perplexity compared to GPTQ.
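To make the coordinate-descent idea concrete, the following is a minimal illustrative sketch, not the paper's actual implementation: for one output channel with weights `w` and calibration activations `X`, we cycle through coordinates and snap each weight to the grid value that minimizes the quadratic reconstruction loss `||Xw - Xq||^2`, holding the other coordinates fixed. The function name, nearest-grid initialization, and cyclic update schedule here are assumptions for illustration.

```python
import numpy as np

def cd_quantize(w, X, grid, n_sweeps=3):
    """Illustrative coordinate descent for quantizing one output channel.

    Minimizes the layer-wise reconstruction loss ||X w - X q||^2 over q
    restricted to the quantization grid, updating one coordinate at a
    time while holding the rest fixed. (Sketch only; the schedule and
    initialization are assumptions, not the paper's exact algorithm.)
    """
    H = X.T @ X                                  # Hessian of the quadratic loss
    # Initialize with nearest-grid rounding of each weight.
    q = grid[np.abs(w[:, None] - grid[None, :]).argmin(axis=1)]
    for _ in range(n_sweeps):
        for i in range(len(w)):
            e = w - q                            # current quantization error
            # Cross term from all other coordinates (excludes i itself).
            r = H[i] @ e - H[i, i] * e[i]
            # Loss contribution of coordinate i for each candidate grid
            # value g is: H_ii * (w_i - g)^2 + 2 * (w_i - g) * r + const.
            cand = w[i] - grid
            q[i] = grid[(H[i, i] * cand**2 + 2 * cand * r).argmin()]
    return q
```

With uncorrelated activations (`H` diagonal) the cross term vanishes and coordinate descent reduces to nearest-grid rounding; the gains come from correlated activations, where updating a coordinate accounts for errors already committed on the others.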