The advancement of Multimodal Large Language Models (MLLMs) has enabled significant progress in multimodal understanding, expanding their capacity to analyze video content. However, existing evaluation benchmarks for MLLMs focus primarily on abstract video comprehension and lack a fine-grained assessment of video composition understanding: the nuanced interpretation of how visual elements combine and interact within compiled videos. We introduce VidComposition, a new benchmark specifically designed to evaluate the video composition understanding capabilities of MLLMs using carefully curated compiled videos and cinematic-level annotations. VidComposition includes 982 videos with 1,706 multiple-choice questions covering a range of compositional aspects, including camera movement, camera angle, shot size, narrative structure, and character actions and emotions. Our comprehensive evaluation of 33 open-source and proprietary MLLMs reveals a significant performance gap between humans and models. This highlights the limitations of current MLLMs in understanding complex, compiled video compositions and offers insights into areas for further improvement. The leaderboard and evaluation code are available at https://yunlong10.github.io/VidComposition/.