This paper describes a semi-automatic pipeline for generating challenging question-answer-decoy sets for long-video understanding. Many existing video datasets and models focus on short clips (10-30 s). While some long video datasets do exist, they can often be solved by powerful image models applied per frame (and often to only a few frames) of a video, and they are usually manually annotated at high cost. To mitigate both problems, we propose a scalable dataset-creation pipeline that leverages large models (VLMs and LLMs) to automatically generate dense, time-aligned video captions, as well as challenging question-answer-decoy sets for video segments up to 15 minutes in length. Our dataset, Neptune, covers a broad range of long-video reasoning abilities and includes a subset that emphasizes multimodal reasoning. Since existing metrics for open-ended question answering are either rule-based or rely on proprietary models, we provide a new open-source, model-based metric, GEM, to score open-ended responses on Neptune. Benchmark evaluations reveal that most current open-source long-video models perform poorly on Neptune, particularly on questions testing temporal ordering, counting, and state changes. Through Neptune, we aim to spur the development of more advanced models capable of understanding long videos. The dataset is available at https://github.com/google-deepmind/neptune