The presence of outliers in the weights and activations of Large Language Models (LLMs) makes them difficult to quantize. Recent work has leveraged rotations to mitigate these outliers. In this work, we propose methods that learn fusible rotations by minimizing principled and cheap proxy objectives for the weight quantization error. We primarily focus on GPTQ as the quantization method. Our main method is OptRot, which reduces weight outliers simply by minimizing the element-wise fourth power of the rotated weights. We show that OptRot outperforms both Hadamard rotations and more expensive, data-dependent methods such as SpinQuant and OSTQuant for weight quantization. It also improves activation quantization in the W4A8 setting. We further propose a data-dependent method, OptRot$^{+}$, which improves performance by incorporating information about the activation covariance. In the W4A4 setting, both OptRot and OptRot$^{+}$ perform worse, highlighting a trade-off between weight and activation quantization.
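To make the fourth-power proxy objective concrete, the following is a minimal PyTorch sketch, not the paper's implementation: it assumes the rotation is parameterized as the matrix exponential of a skew-symmetric matrix (so it stays orthogonal) and optimized with Adam; the names `proxy_loss` and `learn_rotation` are illustrative only.

```python
# Sketch of minimizing the element-wise fourth power of rotated weights.
# Assumptions (not stated in the abstract): rotation parameterized as
# R = exp(A - A^T) with A unconstrained, optimized by Adam.
import torch

def proxy_loss(W: torch.Tensor, R: torch.Tensor) -> torch.Tensor:
    """Sum of element-wise fourth powers of the rotated weights W @ R."""
    return ((W @ R) ** 4).sum()

def learn_rotation(W: torch.Tensor, steps: int = 500, lr: float = 1e-2) -> torch.Tensor:
    d = W.shape[1]
    # A - A^T is skew-symmetric, so exp(A - A^T) is an orthogonal rotation.
    A = torch.zeros(d, d, requires_grad=True)
    opt = torch.optim.Adam([A], lr=lr)
    for _ in range(steps):
        R = torch.matrix_exp(A - A.T)
        loss = proxy_loss(W, R)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.matrix_exp(A - A.T).detach()

# Toy usage: since rotations preserve the Frobenius norm, a lower fourth-power
# sum indicates a flatter (less outlier-heavy) weight distribution.
W = torch.randn(256, 128)
R = learn_rotation(W)
print(proxy_loss(W, torch.eye(128)).item(), proxy_loss(W, R).item())
```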