Image Quality Assessment (IQA) has progressed from scalar quality prediction to more interpretable, human-aligned evaluation paradigms. In this work, we address the emerging challenge of detailed and explainable IQA by proposing iDETEX, a unified multimodal large language model (MLLM) capable of simultaneously performing three key tasks: quality grounding, perception, and description. To facilitate efficient and generalizable training across these heterogeneous subtasks, we design a suite of task-specific offline augmentation modules and a data mixing strategy. These are further complemented by online enhancement strategies that fully exploit multi-sourced supervision. We validate our approach on the large-scale ViDA-UGC benchmark, where iDETEX achieves state-of-the-art performance across all subtasks. Our model ranks first in the ICCV MIPI 2025 Detailed Image Quality Assessment Challenge, demonstrating its effectiveness and robustness in delivering accurate and interpretable quality assessments.