Image Quality Assessment (IQA) is a long-standing problem in computer vision. Previous methods typically either predict numerical scores without explanation or provide low-level descriptions that lack precise scores. Recent reasoning-based vision-language models (VLMs) have shown strong potential for IQA, enabling the joint generation of quality descriptions and scores. However, we observe that existing VLM-based IQA methods tend to exhibit unreliable reasoning due to their limited ability to integrate visual and textual cues. In this work, we introduce Zoom-IQA, a VLM-based IQA model that explicitly emulates key cognitive behaviors: uncertainty awareness, region reasoning, and iterative refinement. Specifically, we present a two-stage training pipeline: 1) supervised fine-tuning (SFT) on our Grounded-Rationale-IQA (GR-IQA) dataset, which teaches the model to ground its assessments in key regions; and 2) reinforcement learning (RL) for dynamic policy exploration, stabilized primarily by our KL-Coverage regularizer, which prevents collapse of reasoning and scoring diversity, and supported by a Progressive Re-sampling Strategy that mitigates annotation bias. Extensive experiments show that Zoom-IQA achieves improved robustness, explainability, and generalization. Applications to downstream tasks such as image restoration further demonstrate the effectiveness of Zoom-IQA.
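The abstract does not specify the exact form of the KL-Coverage regularizer. As a minimal sketch of the diversity-preserving idea it describes, the snippet below penalizes the batch-level distribution of sampled quality scores for collapsing onto a few score bins, using a KL divergence against a uniform prior. The `kl_coverage_penalty` helper, the binning scheme, and the uniform prior are illustrative assumptions, not the paper's formulation.

```python
import torch
import torch.nn.functional as F


def kl_coverage_penalty(score_logits: torch.Tensor) -> torch.Tensor:
    """Illustrative coverage-style regularizer (assumed form, not the paper's).

    Args:
        score_logits: (batch, num_bins) logits over discretized quality-score
            bins for responses sampled from the current policy.

    Returns:
        Scalar KL(batch marginal || uniform); small when sampled scores cover
        many bins, large when they collapse onto a few bins.
    """
    probs = F.softmax(score_logits, dim=-1)       # per-sample bin probabilities
    marginal = probs.mean(dim=0)                  # batch-level score distribution
    num_bins = marginal.shape[-1]
    uniform = torch.full_like(marginal, 1.0 / num_bins)
    return torch.sum(marginal * (marginal.log() - uniform.log()))


# Usage sketch: add the penalty to the RL objective with a weight beta,
# e.g. total_loss = policy_loss + beta * kl_coverage_penalty(score_logits).
```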