Quantization is a crucial technique for efficiently deploying large language models (LLMs), yet the impact of W8A8 post-training quantization on model accuracy, especially for the recently released LLaMA3/3.1 model series, remains contentious. We observe a distinctive quantization-related behavior in the LLaMA3/3.1-70B models that is absent in LLaMA2-70B and in the LLaMA3/3.1/3.2 models at the 1B/3B/8B/405B scales. In this paper, we address three key questions: what makes the LLaMA3-70B model series uniquely vulnerable to quantization, why this is the case, and how the issue can be resolved. We empirically investigate multiple LLMs featured on an open LLM leaderboard and find that the LLaMA3-70B model series exhibits a unique accuracy-degradation pattern under W8A8 per-channel post-training quantization, whereas other model series such as LLaMA2, LLaMA3/3.1-8B, LLaMA3.2, Qwen, Mixtral, Mistral, Phi-3, and Falcon remain robust under W8A8. Contrary to previous assertions attributing the degradation to the large dynamic range of activations, our findings indicate that the weight distribution of LLaMA3-70B is the primary cause of the vulnerability. By carefully analyzing the distinct characteristics of the weight distributions across Transformer blocks, we propose two solutions that make different trade-offs in hardware/software overhead. First, we propose a mixed strategy in which fewer than 3\% of the layers employ finer per-group W8A8 quantization granularity. Second, we introduce a bi-smoothing strategy that balances quantization error between weights and activations while retaining per-channel quantization throughout. Experimental results demonstrate that both strategies effectively preserve the accuracy of the entire LLaMA3-70B model series under W8A8 quantization, achieving performance on par with their FP16 counterparts.
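To make the per-channel vs. per-group distinction concrete, here is a minimal sketch (not the paper's code) of symmetric int8 weight quantization at the two granularities. The weight matrix, group size, and outlier placement are illustrative assumptions: a few high-magnitude columns mimic the heavy-tailed weight distributions the abstract attributes to LLaMA3-70B, and the finer groups isolate those outliers so that fewer values share an inflated scale.

```python
import numpy as np

def quantize_dequantize(w, group_size):
    # Symmetric int8 fake-quantization: one scale per group of `group_size`
    # consecutive entries along the input dimension of each output channel.
    # group_size == in_features reduces to per-channel quantization.
    out_c, in_f = w.shape
    w = w.reshape(out_c, in_f // group_size, group_size)
    scale = np.abs(w).max(axis=-1, keepdims=True) / 127.0
    q = np.clip(np.round(w / scale), -127, 127)
    return (q * scale).reshape(out_c, in_f)

rng = np.random.default_rng(0)
w = rng.normal(scale=0.02, size=(64, 256))
w[:, :4] += rng.normal(scale=0.5, size=(64, 4))  # a few outlier columns

per_channel = quantize_dequantize(w, group_size=256)  # one scale per row
per_group = quantize_dequantize(w, group_size=32)     # finer granularity

err_channel = np.abs(w - per_channel).mean()
err_group = np.abs(w - per_group).mean()
```

Under a per-channel scheme the outliers inflate the single scale for the whole row, coarsening every other value in it; with per-group scales the outliers only affect their own group, so `err_group` comes out smaller than `err_channel`. This is the trade-off the mixed strategy exploits by applying per-group granularity to only the few vulnerable layers.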