Video reasoning, which requires multi-step deduction across frames, remains a major challenge for multimodal large language models (MLLMs). While reinforcement learning (RL)-based methods enhance reasoning capabilities, they often rely on text-only chains of thought that yield ungrounded or hallucinated conclusions. Conversely, frame-retrieval approaches introduce visual grounding but still struggle with inaccurate evidence localization. To address these challenges, we present Conan, a framework for evidence-grounded multi-step video reasoning. Conan identifies contextual and evidence frames, reasons over cross-frame clues, and adaptively decides when to conclude or to explore further. To this end, we (1) construct Conan-91K, a large-scale dataset of automatically generated reasoning traces covering frame identification, evidence reasoning, and action decisions, and (2) design a multi-stage progressive cold-start strategy combined with an Identification-Reasoning-Action (AIR) RL-with-verifiable-rewards (RLVR) training framework to jointly enhance multi-step visual reasoning. Extensive experiments on six multi-step reasoning benchmarks show that Conan surpasses the baseline Qwen2.5-VL-7B-Instruct by an average of over 10% in accuracy, achieving state-of-the-art performance. Furthermore, Conan generalizes effectively to long-video understanding tasks, validating its scalability and robustness.
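To make the Identification-Reasoning-Action loop concrete, the following minimal Python sketch illustrates one plausible reading of the inference-time procedure: identify candidate evidence frames, reason over the accumulated cross-frame clues, then decide whether to answer or keep exploring. It is an illustration only; the policy class, its methods (identify_frames, reason_over_frames, decide_action), and the stopping rule are assumptions for exposition, not the paper's actual interface.

```python
# A minimal, hypothetical sketch of the Identification-Reasoning-Action (AIR)
# loop described in the abstract. Every name here (MockConanPolicy and its
# methods) is an illustrative assumption, not the authors' actual API.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ReasoningState:
    question: str
    frame_ids: List[int] = field(default_factory=list)  # frames inspected so far
    evidence: List[str] = field(default_factory=list)   # clues gathered per step

class MockConanPolicy:
    """Stand-in for the trained MLLM policy, so the loop below is runnable."""

    def identify_frames(self, video: List[str], state: ReasoningState) -> List[int]:
        # Identification: pick the next unseen frame as a candidate evidence frame.
        unseen = [i for i in range(len(video)) if i not in state.frame_ids]
        return unseen[:1]

    def reason_over_frames(self, video, frames, state) -> List[str]:
        # Reasoning: turn each newly grounded frame into a textual clue.
        return [f"clue from frame {i}: {video[i]}" for i in frames]

    def decide_action(self, state: ReasoningState) -> Tuple[str, str]:
        # Action: conclude once enough cross-frame evidence has accumulated,
        # otherwise signal that more exploration is needed.
        if len(state.evidence) >= 2:
            return "answer", "; ".join(state.evidence)
        return "explore", ""

def answer_video_question(policy, video, question, max_steps=8) -> str:
    state = ReasoningState(question=question)
    for _ in range(max_steps):
        new_frames = policy.identify_frames(video, state)        # 1. Identification
        state.frame_ids.extend(new_frames)
        state.evidence.extend(
            policy.reason_over_frames(video, new_frames, state)) # 2. Reasoning
        action, answer = policy.decide_action(state)             # 3. Action
        if action == "answer":
            return answer
    return "; ".join(state.evidence)  # fall back once the step budget is spent

print(answer_video_question(
    MockConanPolicy(),
    video=["person enters", "picks up key", "opens door"],
    question="How did the person open the door?"))
```

In this reading, the adaptive exploration the abstract describes corresponds to the "explore" branch: the model defers its answer until the accumulated evidence is judged sufficient, rather than committing after a single text-only reasoning pass.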