Vision-Language Models (VLMs) achieve strong multimodal performance but are costly to deploy, and post-training quantization often causes significant accuracy loss. Despite its potential, quantization-aware training for VLMs remains underexplored. We propose GRACE, a framework unifying knowledge distillation and QAT under the Information Bottleneck principle: quantization constrains information capacity, while distillation guides what to preserve within this budget. Treating the teacher as a proxy for task-relevant information, we introduce confidence-gated decoupled distillation to filter unreliable supervision, relational centered kernel alignment to transfer visual token structures, and an adaptive controller via Lagrangian relaxation to balance fidelity against capacity constraints. Across extensive benchmarks on the LLaVA and Qwen families, our INT4 models consistently outperform FP16 baselines (e.g., LLaVA-1.5-7B: 70.1 vs. 66.8 on SQA; Qwen2-VL-2B: 76.9 vs. 72.6 on MMBench), nearly matching teacher performance. Using real INT4 kernels, we achieve 3$\times$ throughput with a 54% memory reduction. This principled framework significantly outperforms existing quantization methods, making GRACE a compelling solution for resource-constrained deployment.
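The fidelity-capacity trade-off that the adaptive controller manages can be sketched as a constrained objective relaxed via a Lagrange multiplier. This is an illustrative reconstruction, not the paper's exact formulation: the symbols $\mathcal{L}_{\mathrm{KD}}$ (distillation fidelity loss), $C(\theta_q)$ (information capacity of the quantized parameters $\theta_q$), and budget $B$ are assumed names.

```latex
\min_{\theta_q}\; \mathcal{L}_{\mathrm{KD}}(\theta_q)
\quad \text{s.t.} \quad C(\theta_q) \le B
\;\;\Longrightarrow\;\;
\min_{\theta_q}\,\max_{\lambda \ge 0}\;
\mathcal{L}_{\mathrm{KD}}(\theta_q) \;+\; \lambda \bigl( C(\theta_q) - B \bigr)
```

Under this reading, the adaptive controller updates the multiplier $\lambda$ during training: $\lambda$ grows when the capacity constraint is violated (pushing toward the quantization budget) and shrinks toward zero when there is slack, letting distillation fidelity dominate.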