Quantization of large language models (LLMs) faces significant challenges, particularly due to outlier activations that impede efficient low-bit representation. Traditional approaches predominantly address Normal Outliers, i.e., activations with relatively large magnitudes across all tokens. However, these methods struggle to smooth Massive Outliers, whose values are far larger still, leading to severe performance degradation in low-bit quantization. In this paper, we introduce DuQuant, a novel approach that employs rotation and permutation transformations to mitigate both massive and normal outliers more effectively. First, DuQuant constructs a rotation matrix, using known outlier dimensions as prior knowledge, to redistribute outliers to adjacent channels via block-wise rotation. Second, we apply a zigzag permutation to balance the distribution of outliers across blocks, thereby reducing block-wise variance. A subsequent rotation further smooths the activation landscape, enhancing model performance. DuQuant simplifies the quantization process and excels at managing outliers, outperforming state-of-the-art baselines across various sizes and types of LLMs on multiple tasks, even under 4-bit weight-activation quantization. Our code is available at https://github.com/Hsu1023/DuQuant.
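The rotate-permute-rotate pipeline described above can be sketched in NumPy. This is a toy illustration under stated assumptions, not the paper's implementation: the random orthogonal rotation, the block size, and the magnitude statistics used here are illustrative stand-ins, whereas DuQuant constructs its rotations from the actual outlier dimensions as prior knowledge.

```python
import numpy as np

def random_rotation(n, rng):
    # Random orthogonal matrix via QR decomposition. DuQuant instead builds
    # rotations targeted at known outlier dimensions; this random stand-in
    # only shows how an orthogonal transform spreads a spike across channels.
    q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return q

def blockwise_rotate(x, block, rng):
    # Rotate each contiguous group of `block` channels independently,
    # redistributing any outlier mass among neighboring channels.
    out = x.copy()
    for s in range(0, x.shape[1], block):
        out[:, s:s + block] = out[:, s:s + block] @ random_rotation(block, rng)
    return out

def zigzag_permutation(channel_mags, block):
    # Deal channels, sorted by descending magnitude, to blocks in a
    # back-and-forth (zigzag) order so every block receives a balanced
    # mix of large and small channels, reducing block-wise variance.
    n_blocks = len(channel_mags) // block
    order = np.argsort(-channel_mags)
    assign, forward = [], True
    while len(assign) < len(order):
        assign.extend(range(n_blocks) if forward else range(n_blocks - 1, -1, -1))
        forward = not forward
    buckets = [[] for _ in range(n_blocks)]
    for ch, b in zip(order, assign):
        buckets[b].append(ch)
    return np.concatenate([np.array(b, dtype=int) for b in buckets])

# --- Toy demo ---
rng = np.random.default_rng(0)
x = rng.standard_normal((16, 8))
x[:, 0] += 100.0                      # a massive outlier channel
x_rot = blockwise_rotate(x, block=4, rng=rng)
# Row norms are preserved (rotations are orthogonal), while channel 0's
# spike is now shared among the four channels of its block.

# Two massive channels that originally sit in the same block of size 2:
mags = np.array([100.0, 90.0, 1, 1, 1, 1, 1, 1])
perm = zigzag_permutation(mags, block=2)
before = mags.reshape(-1, 2).mean(axis=1).var()         # unbalanced blocks
after = mags[perm].reshape(-1, 2).mean(axis=1).var()    # balanced blocks
```

In this sketch the variance of per-block mean magnitudes drops after the zigzag permutation, which mirrors the role the permutation plays between the two rotations in DuQuant.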