While generative video models have achieved remarkable visual fidelity, their capacity to internalize and reason over implicit world rules remains a critical yet under-explored frontier. To bridge this gap, we present RISE-Video, a pioneering reasoning-oriented benchmark for Text-Image-to-Video (TI2V) synthesis that shifts the evaluative focus from surface-level aesthetics to deep cognitive reasoning. RISE-Video comprises 467 meticulously human-annotated samples spanning eight rigorous categories, providing a structured testbed for probing model intelligence across diverse dimensions, ranging from commonsense and spatial dynamics to specialized subject domains. Our framework introduces a multi-dimensional evaluation protocol consisting of four metrics: \textit{Reasoning Alignment}, \textit{Temporal Consistency}, \textit{Physical Rationality}, and \textit{Visual Quality}. To further support scalable evaluation, we propose an automated pipeline leveraging Large Multimodal Models (LMMs) to emulate human-centric assessment. Extensive experiments on 11 state-of-the-art TI2V models reveal pervasive deficiencies in simulating complex scenarios under implicit constraints, offering critical insights for the advancement of future world-simulating generative models.