Image Quality Assessment (IQA) is a long-standing problem in computer vision. Previous methods typically either predict numerical scores without explanation or provide low-level descriptions lacking precise scores. Recent reasoning-based vision language models (VLMs) have shown strong potential for IQA by jointly generating quality descriptions and scores. However, existing VLM-based IQA methods often suffer from unreliable reasoning due to their limited capability to integrate visual and textual cues. In this work, we introduce Zoom-IQA, a VLM-based IQA model that explicitly emulates key cognitive behaviors: uncertainty awareness, region reasoning, and iterative refinement. Specifically, we present a two-stage training pipeline: 1) supervised fine-tuning (SFT) on our Grounded-Rationale-IQA (GR-IQA) dataset to teach the model to ground its assessments in key regions, and 2) reinforcement learning (RL) for dynamic policy exploration, stabilized by our KL-Coverage regularizer to prevent collapse of reasoning and scoring diversity, together with a Progressive Re-sampling Strategy for mitigating annotation bias. Extensive experiments show that Zoom-IQA achieves improved robustness, explainability, and generalization. Applications to downstream tasks, such as image restoration, further demonstrate the effectiveness of Zoom-IQA.
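The abstract does not specify how the KL-Coverage regularizer is computed. As a rough, purely illustrative sketch (the paper's actual formulation may differ), one way to penalize scoring-diversity collapse is to measure the KL divergence between the empirical distribution of scores sampled from the policy, binned over the score range, and a uniform prior over bins; all names and choices below are assumptions, not the authors' implementation:

```python
import math
from collections import Counter

def kl_coverage_penalty(sampled_scores, num_bins=10, eps=1e-8):
    """Hypothetical sketch of a coverage-style KL penalty.

    Bins the quality scores sampled from the policy (assumed in [0, 1])
    and computes KL(empirical || uniform) over the bins. A collapsed
    policy (all rollouts emit the same score) yields a large value;
    well-spread scores yield a value near zero.
    """
    counts = Counter(min(int(s * num_bins), num_bins - 1) for s in sampled_scores)
    n = len(sampled_scores)
    uniform = 1.0 / num_bins
    kl = 0.0
    for b in range(num_bins):
        p = counts.get(b, 0) / n
        if p > 0:  # 0 * log(0) term contributes nothing
            kl += p * math.log((p + eps) / uniform)
    return kl

collapsed = [0.5] * 8                                     # every rollout scores 0.5
spread = [0.05, 0.15, 0.25, 0.35, 0.45, 0.55, 0.65, 0.75] # scores cover 8 bins
print(kl_coverage_penalty(collapsed) > kl_coverage_penalty(spread))  # → True
```

Under this sketch, subtracting the penalty from the RL objective would push the policy toward maintaining a spread of candidate scores rather than collapsing to a single value.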