Rapid progress in video models has largely focused on visual quality, leaving their reasoning capabilities underexplored. Video reasoning grounds intelligence in spatiotemporally consistent visual environments that go beyond what text can naturally capture, enabling intuitive reasoning over spatiotemporal structure such as continuity, interaction, and causality. However, systematic study of video reasoning and its scaling behavior is hindered by the lack of large-scale training data. To address this gap, we introduce the Very Big Video Reasoning (VBVR) Dataset, an unprecedentedly large-scale resource spanning 200 curated reasoning tasks, organized under a principled taxonomy, and over one million video clips, approximately three orders of magnitude more than existing datasets. We further present VBVR-Bench, a verifiable evaluation framework that moves beyond model-based judging by incorporating rule-based, human-aligned scorers, enabling reproducible and interpretable diagnosis of video reasoning capabilities. Leveraging the VBVR suite, we conduct one of the first large-scale scaling studies of video reasoning and observe early signs of emergent generalization to unseen reasoning tasks. Together, these contributions lay a foundation for the next stage of research in generalizable video reasoning. The data, benchmark toolkit, and models are publicly available at https://video-reason.com/.