Large language models (LLMs) have achieved remarkable advancements in natural language processing, showcasing exceptional performance across a wide range of tasks. However, their substantial memory and computational requirements pose significant challenges for practical deployment. Low-bit quantization has emerged as a critical approach to mitigating these challenges by reducing the bit-width of model parameters, activations, and gradients, thereby decreasing memory usage and computational demands. This paper presents a comprehensive survey of low-bit quantization methods tailored for LLMs, covering the fundamental principles, system implementations, and algorithmic strategies. We first introduce the basic concepts and new data formats specific to low-bit LLMs, followed by a review of frameworks and systems that facilitate low-bit LLMs across various hardware platforms. We then categorize and analyze techniques and toolkits for efficient low-bit training and inference of LLMs. Finally, we conclude with a discussion of future trends and potential advancements of low-bit LLMs. Our systematic overview from basic, system, and algorithm perspectives can offer valuable insights and guidelines for future work to enhance the efficiency and applicability of LLMs through low-bit quantization.
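To make the core idea concrete, the following is a minimal sketch of symmetric low-bit weight quantization, the basic principle underlying the methods this survey covers. The function names and the per-tensor scaling scheme are illustrative assumptions, not the API of any particular framework discussed in the paper.

```python
import numpy as np

def quantize_symmetric(w: np.ndarray, bits: int = 4):
    """Map float weights to signed integers in [-(2^(b-1)-1), 2^(b-1)-1].

    Illustrative per-tensor symmetric scheme; real systems often use
    finer granularity (per-channel or per-group scales).
    """
    qmax = 2 ** (bits - 1) - 1            # e.g. 7 for 4-bit
    scale = np.abs(w).max() / qmax        # single per-tensor scale factor
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original float weights."""
    return q.astype(np.float32) * scale

# A 4-bit representation stores 8x fewer bits per weight than float32,
# at the cost of a bounded round-off error of at most scale / 2.
w = np.random.randn(8).astype(np.float32)
q, s = quantize_symmetric(w, bits=4)
w_hat = dequantize(q, s)
assert np.max(np.abs(w - w_hat)) <= s / 2 + 1e-6
```

In practice, low-bit methods for LLMs refine this basic recipe, for example with finer-grained scales, non-uniform data formats, or calibration to minimize the quantization error on real activations.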