Modern large language models (LLMs) have established state-of-the-art performance through architectural improvements, but still incur significant computational cost at inference time. To reduce this cost, post-training quantization (PTQ) has become a popular approach, quantizing weights and activations to lower precision, such as INT8. In this paper, we reveal the challenges of activation quantization in GLU variants, which are widely used in the feed-forward networks (FFNs) of modern LLMs such as the LLaMA family. The problem is that severe local quantization errors, caused by excessively large activation magnitudes in GLU variants, significantly degrade the performance of the quantized LLM. We denote these activations as activation spikes. Our further observations reveal a systematic pattern of activation spikes: 1) the activation spikes occur in the FFN of specific layers, particularly the early and late layers; 2) the activation spikes are concentrated on a couple of tokens, rather than being shared across a sequence. Based on our observations, we propose two empirical methods, Quantization-free Module (QFeM) and Quantization-free Prefix (QFeP), to isolate the activation spikes during quantization. Our extensive experiments validate the effectiveness of the proposed methods for activation quantization, especially under a coarse-grained scheme, on the latest LLMs with GLU variants, including LLaMA-2/3, Mistral, Mixtral, SOLAR, and Gemma. In particular, our methods enhance current alleviation techniques (e.g., SmoothQuant) that fail to control the activation spikes. Code is available at https://github.com/onnoo/activation-spikes.
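To see why a single activation spike is so damaging under a coarse-grained scheme, consider per-tensor symmetric INT8 quantization, where one scale (derived from the largest magnitude) is shared by every element. The sketch below is illustrative only and is not the paper's method: the spike magnitude (1000.0) and tensor size are hypothetical, chosen to show how one outlier inflates the shared scale and collapses the resolution of all the ordinary values.

```python
import random

def quantize_int8(xs):
    # Coarse-grained (per-tensor) symmetric INT8 quantization:
    # one scale, set by the largest magnitude, is shared by all values.
    scale = max(abs(x) for x in xs) / 127.0
    return [max(-127, min(127, round(x / scale))) * scale for x in xs]

def mse(xs, ys):
    return sum((x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

random.seed(0)
acts = [random.gauss(0.0, 1.0) for _ in range(4096)]  # typical activations
spiked = list(acts)
spiked[0] = 1000.0  # a single "activation spike" (hypothetical magnitude)

err_plain = mse(acts, quantize_int8(acts))
err_spike = mse(spiked, quantize_int8(spiked))
# The spike inflates the shared scale, so the quantization error on the
# remaining 4095 ordinary values grows by orders of magnitude: most of
# them now round to zero.
```

This is exactly the failure mode that motivates isolating the spikes (as QFeM/QFeP do) rather than rescaling the whole tensor: with the spike present, nearly all non-spike activations fall below half the quantization step and are rounded away.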