Despite advances in the application of MLLMs to various video tasks, video event prediction (VEP) remains relatively underexplored. VEP requires the model to perform fine-grained temporal modeling of videos and to establish logical relationships between videos and future events, both of which current MLLMs still struggle with. In this work, we first present a comprehensive evaluation of leading MLLMs on the VEP task, revealing the reasons behind their inaccurate predictions, including a lack of logical reasoning ability for future event prediction and insufficient utilization of visual information. To address these challenges, we propose the \textbf{C}hain \textbf{o}f \textbf{E}vents (\textbf{CoE}) paradigm, which constructs temporal event chains to implicitly force the MLLM to focus on the visual content and on the logical connections between videos and future events, incentivizing the model's reasoning capability through multiple training protocols. Experimental results on public benchmarks demonstrate that our method outperforms both leading open-source and commercial MLLMs, establishing a new state of the art on the VEP task. Code and models will be released soon.