Rapid progress in video models has largely focused on visual quality, leaving their reasoning capabilities underexplored. Video reasoning grounds intelligence in spatiotemporally consistent visual environments that go beyond what text can naturally capture, enabling intuitive reasoning over spatiotemporal structure such as continuity, interaction, and causality. However, systematic study of video reasoning and its scaling behavior is hindered by the lack of large-scale training data. To address this gap, we introduce the Very Big Video Reasoning (VBVR) Dataset, an unprecedentedly large-scale resource spanning 200 curated reasoning tasks organized under a principled taxonomy and comprising over one million video clips, approximately three orders of magnitude more than existing datasets. We further present VBVR-Bench, a verifiable evaluation framework that moves beyond model-based judging by incorporating rule-based, human-aligned scorers, enabling reproducible and interpretable diagnosis of video reasoning capabilities. Leveraging the VBVR suite, we conduct one of the first large-scale scaling studies of video reasoning and observe early signs of emergent generalization to unseen reasoning tasks. Together, these contributions lay a foundation for the next stage of research in generalizable video reasoning. The data, benchmark toolkit, and models are publicly available at https://video-reason.com/.