Efficiently retrieving and synthesizing information from large-scale multimodal collections has become a critical challenge. However, existing video retrieval datasets are limited in scope, primarily focusing on matching descriptive but vague queries with small collections of professionally edited, English-centric videos. To address this gap, we introduce $\textbf{MultiVENT 2.0}$, a large-scale, multilingual, event-centric video retrieval benchmark featuring a collection of more than 218,000 news videos and 3,906 queries targeting specific world events. These queries target information found in the visual content, audio, embedded text, and text metadata of the videos, requiring systems to leverage all of these sources to succeed at the task. Preliminary results show that state-of-the-art vision-language models struggle significantly with this task, and while alternative approaches show promise, they remain insufficient to address the problem adequately. These findings underscore the need for more robust multimodal retrieval systems, as effective video retrieval is a crucial step towards multimodal content understanding and generation.