Mamba is an efficient sequence model that rivals Transformers and demonstrates significant potential as a foundational architecture for various tasks. Quantization is commonly used in neural networks to reduce model size and computational latency. However, applying quantization to Mamba remains underexplored, and existing quantization methods, which have been effective for CNN and Transformer models, appear inadequate for Mamba models (e.g., QuaRot suffers a 21% accuracy drop on Vim-T$^\dagger$ even under W8A8). We pioneer the exploration of this issue and identify several key challenges. First, significant outliers are present in gate projections, output projections, and matrix multiplications. Second, Mamba's unique parallel scan further amplifies these outliers, leading to uneven and heavy-tailed data distributions. Third, even after applying the Hadamard transform, the variance across channels in weights and activations remains inconsistent. To this end, we propose MambaQuant, a post-training quantization (PTQ) framework consisting of: 1) Karhunen-Loève Transformation (KLT) enhanced rotation, which renders the rotation matrix adaptable to diverse channel distributions; 2) Smooth-Fused rotation, which equalizes channel variances and can merge additional parameters into model weights. Experiments show that MambaQuant can quantize both weights and activations to 8-bit with less than 1% accuracy loss on Mamba-based vision and language tasks. To the best of our knowledge, MambaQuant is the first comprehensive PTQ design for the Mamba family, paving the way for further advancements in its application.
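The core idea behind the KLT-enhanced rotation can be illustrated in a few lines. The sketch below is a simplified, hypothetical rendering (not the paper's implementation): the KLT matrix is taken as the eigenvector basis of the calibration activations' channel covariance, and composing it with a normalized Hadamard matrix yields an orthogonal rotation that both adapts to the channel distribution and equalizes per-channel variance, while leaving the layer's output unchanged.

```python
import numpy as np
from scipy.linalg import hadamard

def klt_enhanced_rotation(X):
    """Build an orthogonal rotation from calibration activations X of shape (n, d).

    KLT step: eigenvectors K of the channel covariance decorrelate the channels.
    Hadamard step: a normalized Hadamard matrix H spreads the (now diagonal)
    variance evenly across all channels. K @ H is still orthogonal.
    """
    d = X.shape[1]
    cov = np.cov(X, rowvar=False)          # (d, d) channel covariance
    _, K = np.linalg.eigh(cov)             # orthogonal eigenvector matrix
    H = hadamard(d) / np.sqrt(d)           # normalized Hadamard (d a power of 2)
    return K @ H

rng = np.random.default_rng(0)
d = 8
# Synthetic activations with one heavy outlier channel (channel 7, scaled by 50).
X = rng.normal(size=(256, d)) * np.array([1, 1, 1, 1, 1, 1, 1, 50.0])
W = rng.normal(size=(d, d))

R = klt_enhanced_rotation(X)
# Orthogonal rotation is computation-invariant: (X R)(R^T W) equals X W.
out_ref = X @ W
out_rot = (X @ R) @ (R.T @ W)

# Per-channel spread before vs. after: the rotated activations have nearly
# uniform channel variance, which is far friendlier to per-tensor quantization.
ratio_before = X.std(axis=0).max() / X.std(axis=0).min()
ratio_after = (X @ R).std(axis=0).max() / (X @ R).std(axis=0).min()
```

Here `ratio_before` is on the order of 50 (the outlier channel dominates), while `ratio_after` is close to 1, since each rotated channel's variance approaches the average of the KLT eigenvalues. A plain Hadamard rotation alone cannot guarantee this equalization when the eigenvalue spectrum is skewed, which is the motivation the abstract gives for the KLT enhancement.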