While the situation has improved for text-only models, multimodal (text and image) models once again appear to be developing faster than the means to evaluate them. In this paper, we bring a recently developed evaluation paradigm from text models to multimodal models, namely evaluation through goal-oriented game (self-)play, complementing reference-based and preference-based evaluation. Specifically, we define games that challenge a model's capability to represent a situation from visual information and to align such representations through dialogue. We find that the largest closed models perform rather well on the games that we define, while even the best open-weight models struggle with them. Further analysis shows that the exceptional deep captioning capabilities of the largest models drive some of this performance. There is still room to grow for both kinds of models, ensuring the continued relevance of the benchmark.
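To make the paradigm concrete, the following is a minimal sketch of one possible dialogue game of this kind: a reference game in which one model instance describes a target image and a second instance tries to identify it among distractors, so that success requires both grounding the description in the image and aligning on it through the exchange. The interface (`MultimodalModel`), function names, and prompts are illustrative assumptions, not the paper's actual implementation.

```python
from typing import Callable

# Hypothetical interface: a multimodal model maps (image, prompt) -> text reply.
# This is a placeholder for illustration, not the benchmark's real API.
MultimodalModel = Callable[[bytes, str], str]


def reference_game(describer: MultimodalModel,
                   guesser: MultimodalModel,
                   target: bytes,
                   candidates: list[bytes]) -> bool:
    """Play one episode: the describer sees only the target image and
    produces a description; the guesser sees each candidate together
    with that description and answers yes/no. The episode succeeds if
    the first accepted candidate is the target."""
    description = describer(target, "Describe this image for a partner.")
    prompt = (f"A partner described one of several images as: {description!r}. "
              "Is this that image? Answer yes or no.")
    for candidate in candidates:
        if guesser(candidate, prompt).strip().lower().startswith("yes"):
            return candidate == target
    return False  # no candidate accepted: the episode fails


# Toy stand-ins over byte strings, just to show the episode loop runs:
describer = lambda img, _prompt: img.decode()
guesser = lambda img, prompt: "yes" if img.decode() in prompt else "no"
print(reference_game(describer, guesser, b"red cube", [b"blue ball", b"red cube"]))
```

Success on such an episode can then be aggregated over many image sets into a benchmark score, which is what allows game (self-)play to complement reference-based and preference-based evaluation.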