Vision-Language Models (VLMs) achieve strong multimodal performance but are costly to deploy, and post-training quantization often causes significant accuracy loss. Despite its potential, quantization-aware training (QAT) for VLMs remains underexplored. We propose GRACE, a framework unifying knowledge distillation and QAT under the Information Bottleneck principle: quantization constrains information capacity, while distillation guides what to preserve within this budget. Treating the teacher as a proxy for task-relevant information, we introduce confidence-gated decoupled distillation to filter unreliable supervision, relational centered kernel alignment to transfer visual token structures, and an adaptive controller based on Lagrangian relaxation to balance fidelity against capacity constraints. Across extensive benchmarks on the LLaVA and Qwen families, our INT4 models consistently outperform FP16 baselines (e.g., LLaVA-1.5-7B: 70.1 vs. 66.8 on SQA; Qwen2-VL-2B: 76.9 vs. 72.6 on MMBench), nearly matching teacher performance. Using real INT4 kernels, we achieve 3$\times$ throughput with a 54% memory reduction. This principled framework significantly outperforms existing quantization methods, making GRACE a compelling solution for resource-constrained deployment.
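The abstract does not spell out the alignment objective, but "relational centered kernel alignment" suggests the standard linear CKA similarity between student and teacher visual token representations. As a hedged illustration only (the actual GRACE loss may differ), a minimal linear-CKA sketch over token feature matrices might look like:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two feature matrices of shape (n_tokens, dim).

    Hypothetical sketch of the kind of relational similarity the abstract
    alludes to; not the paper's exact formulation.
    """
    # Center each feature dimension across tokens.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # CKA(X, Y) = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    num = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    den = np.linalg.norm(X.T @ X, ord="fro") * np.linalg.norm(Y.T @ Y, ord="fro")
    return num / den
```

A distillation loss of the form `1 - linear_cka(student_tokens, teacher_tokens)` would then encourage the quantized student to preserve the teacher's token-to-token structure rather than match features elementwise; CKA is invariant to orthogonal transforms and isotropic scaling, which makes it a natural choice for comparing representations across differently parameterized models.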