Large language models (LLMs) show impressive performance in solving complex language tasks. However, their large number of parameters presents significant challenges for deploying and applying these models on edge devices. Compressing LLMs to low bit-widths enables them to run on resource-constrained devices, but often leads to performance degradation. To address this problem, we propose gradient-aware weight quantization (GWQ), the first low-bit weight quantization approach that leverages gradients to localize outliers, requiring only a minimal amount of calibration data for outlier detection. GWQ preferentially retains the weights corresponding to the top 1% of outliers at FP16 precision, while the remaining non-outlier weights are stored in a low-bit format. We found experimentally that localizing sensitive weights via gradients is more principled than localizing them via the Hessian matrix. Compared with current quantization methods, GWQ can be applied to multiple language models and achieves lower perplexity (PPL) on the WikiText2 and C4 datasets. On zero-shot tasks, GWQ-quantized models achieve higher accuracy than other quantization methods. GWQ is also suitable for multimodal model quantization: the quantized Qwen-VL family of models is more accurate than those produced by other methods, and on the zero-shot object detection dataset RefCOCO, GWQ outperforms the current state-of-the-art method SPQR. GWQ achieves a 1.2x inference speedup over the original model and effectively reduces inference memory.
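The core idea of keeping the top 1% gradient-magnitude outliers in FP16 while quantizing the rest to low bits can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the function name is hypothetical, and a simple per-tensor uniform quantizer stands in for whatever low-bit format GWQ actually uses.

```python
import numpy as np

def gwq_style_quantize(weights, grads, keep_ratio=0.01, bits=4):
    """Sketch of gradient-aware mixed-precision quantization:
    weights whose calibration-data gradients have the largest magnitude
    are kept in full precision ("outliers"); the remaining weights are
    uniformly quantized to `bits` bits. Illustrative only."""
    flat_g = np.abs(grads).ravel()
    k = max(1, int(keep_ratio * flat_g.size))
    # indices of the top-k gradient-magnitude "outlier" weights
    outlier_idx = np.argpartition(flat_g, -k)[-k:]

    w = weights.astype(np.float64).ravel().copy()
    mask = np.zeros(w.size, dtype=bool)
    mask[outlier_idx] = True  # these stay at full precision

    # uniform low-bit quantization of the non-outlier weights
    non_out = w[~mask]
    if non_out.size:
        lo, hi = non_out.min(), non_out.max()
        levels = 2 ** bits - 1
        scale = (hi - lo) / levels if hi > lo else 1.0
        q = np.round((non_out - lo) / scale)  # integer code in [0, levels]
        w[~mask] = q * scale + lo             # dequantized value
    return w.reshape(weights.shape), mask.reshape(weights.shape)
```

In a real pipeline, `grads` would come from backpropagating a language-modeling loss over the small calibration set; here any array of matching shape demonstrates the selection mechanism.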