We propose a dedicated multimodal Judge Model designed to provide reliable, explainable evaluation across a diverse suite of tasks. Our benchmark spans the text, audio, image, and video modalities, drawing on public datasets sampled with fixed seeds to ensure reproducibility and minimize train-test leakage. Rather than producing simple scalar scores, our framework aggregates multimodal judgments, analyzes the quality and reasoning consistency of model outputs, and generates diagnostic feedback. We evaluate several MLLMs, including Gemini 2.5, Phi 4, and Qwen 2.5, on 280 multimodal samples and compare the Judge Model's assessments against ratings from human annotators. The results show strong alignment between Judge Model and human scores, demonstrating its potential as a scalable, interpretable evaluation pipeline for future multimodal AI research.