Video understanding is a crucial next step for multimodal large language models (MLLMs), and various benchmarks have been introduced to evaluate them. Nevertheless, current video benchmarks remain inefficient for evaluating video models during iterative development, owing to the high cost of constructing datasets and the difficulty of isolating specific skills. In this paper, we propose VideoNIAH (Video Needle In A Haystack), a benchmark construction framework based on synthetic video generation. VideoNIAH decouples video content from query-response pairs by inserting unrelated visual 'needles' into original videos. The framework automates the generation of query-response pairs using predefined rules, minimizing manual labor. The queries focus on specific aspects of video understanding, enabling more skill-specific evaluations. The separation between video content and queries also allows for greater video variety and evaluation across different video lengths. Using VideoNIAH, we compile a video benchmark, VNBench, which includes retrieval, ordering, and counting tasks to evaluate three key aspects of video understanding: temporal perception, chronological ordering, and spatio-temporal coherence. We conduct a comprehensive evaluation of both proprietary and open-source models, uncovering significant differences in their video understanding capabilities across tasks. We further perform an in-depth analysis of the test results and model configurations, and based on these findings, we offer recommendations for improving video MLLM training, providing valuable insights to guide future research and model development. The code and data are available at https://github.com/joez17/VideoNIAH.
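To make the construction idea concrete, the following is a minimal sketch of needle insertion and rule-based query-response generation, not the authors' implementation: the helpers `insert_needles` and `make_counting_query`, the placeholder square-patch needle, and the dummy frames are all illustrative assumptions; a real pipeline would composite image or text needles into actual video frames.

```python
# Hypothetical sketch of the VideoNIAH idea (not the released code):
# insert synthetic "needles" into an existing video and derive the answer
# to a counting-style query directly from the insertion rule.
import random
from dataclasses import dataclass

import numpy as np


@dataclass
class Needle:
    frame_index: int   # where the needle is inserted along the timeline
    label: str         # what the needle depicts (e.g., an object name)


def insert_needles(frames: list[np.ndarray], labels: list[str],
                   num_needles: int, seed: int = 0) -> tuple[list[np.ndarray], list[Needle]]:
    """Overlay unrelated visual needles onto randomly chosen frames.

    The overlay here is a stand-in (a bright square patch); a real pipeline
    would composite an image or render text onto the frame.
    """
    rng = random.Random(seed)
    needles = []
    edited = [f.copy() for f in frames]
    for idx in sorted(rng.sample(range(len(frames)), num_needles)):
        label = rng.choice(labels)
        edited[idx][:32, :32] = 255  # placeholder "needle" patch in the top-left corner
        needles.append(Needle(frame_index=idx, label=label))
    return edited, needles


def make_counting_query(needles: list[Needle], target: str) -> tuple[str, int]:
    """Rule-based query-response generation: the answer is known by construction."""
    question = f"How many times does the inserted '{target}' needle appear in the video?"
    answer = sum(n.label == target for n in needles)
    return question, answer


if __name__ == "__main__":
    video = [np.zeros((224, 224, 3), dtype=np.uint8) for _ in range(300)]  # dummy frames
    edited_video, needles = insert_needles(video, ["red apple", "blue cube"], num_needles=4)
    question, answer = make_counting_query(needles, target="red apple")
    print(question, "->", answer)
```

Because the ground-truth answer follows from the insertion rule rather than from annotating the original footage, the same procedure can be applied to arbitrary source videos and lengths, which is what enables the decoupling described above.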