Existing multimodal large language models for long-video understanding predominantly rely on uniform sampling and single-turn inference, limiting their ability to identify sparse yet critical evidence amid extensive redundancy. We introduce Video-o3, a novel framework that supports iterative discovery of salient visual clues, fine-grained inspection of key segments, and adaptive termination once sufficient evidence is acquired. Technically, we address two core challenges in interleaved tool invocation. First, to mitigate the attention dispersion induced by the heterogeneity of reasoning and tool-calling, we propose Task-Decoupled Attention Masking, which confines each step's attention to its own context while preserving the shared global context. Second, to control context-length growth in multi-turn interactions, we introduce a Verifiable Trajectory-Guided Reward that balances exploration coverage with reasoning efficiency. To support training at scale, we further develop a data synthesis pipeline and construct Seeker-173K, comprising 173K high-quality tool-interaction trajectories for effective supervised and reinforcement learning. Extensive experiments show that Video-o3 substantially outperforms state-of-the-art methods, achieving 72.1% accuracy on MLVU and 46.5% on Video-Holmes. These results demonstrate Video-o3's strong multi-hop evidence-seeking and reasoning capabilities, and validate the effectiveness of native tool invocation in long-video scenarios.
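The abstract does not give the construction of Task-Decoupled Attention Masking; the following is a minimal illustrative sketch under the assumption that tokens are tagged with a per-step segment id, that a designated segment holds the shared global context, and that each step attends only to itself plus that global segment under a causal constraint. The function name and segment convention are hypothetical, not taken from the paper.

```python
import numpy as np

def build_task_decoupled_mask(segment_ids, global_id=0):
    """Build a boolean attention mask (True = position may be attended).

    Hypothetical sketch of Task-Decoupled Attention Masking: every token
    attends to the shared global context (segment `global_id`) and to
    tokens within its own step segment, while reasoning/tool-call tokens
    from other steps are masked out. A causal constraint (j <= i) is
    applied on top, as in standard decoder attention.
    """
    seg = np.asarray(segment_ids)
    n = len(seg)
    causal = np.tril(np.ones((n, n), dtype=bool))   # token i sees j <= i
    same_step = seg[None, :] == seg[:, None]        # within one step
    is_global = seg[None, :] == global_id           # shared context
    return causal & (same_step | is_global)

# Example: global context (segment 0), step 1 (segment 1), step 2 (segment 2)
mask = build_task_decoupled_mask([0, 0, 1, 1, 2, 2])
```

Here a step-2 token can still read the global context but not step-1's intermediate reasoning, which is one plausible way to isolate per-step attention without discarding shared state.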
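The exact form of the Verifiable Trajectory-Guided Reward is likewise not specified in the abstract; the sketch below only illustrates the stated trade-off, combining a verifiable correctness term with a coverage bonus and an efficiency discount on trajectory length. The decomposition, the function name, and all weights are assumptions for illustration.

```python
def trajectory_reward(answer_correct, segments_visited, total_segments,
                      num_turns, max_turns, w_cov=0.3, w_eff=0.2):
    """Hypothetical sketch of a trajectory-level reward.

    correctness : verifiable 0/1 signal for the final answer
    coverage    : fraction of relevant video segments actually inspected,
                  rewarding exploration
    efficiency  : discount for long multi-turn trajectories, curbing
                  context-length growth
    """
    correctness = 1.0 if answer_correct else 0.0
    coverage = segments_visited / total_segments if total_segments else 0.0
    efficiency = 1.0 - num_turns / max_turns
    return correctness + w_cov * coverage + w_eff * efficiency
```

For instance, a correct answer reached in 2 of at most 8 turns after inspecting 3 of 4 relevant segments scores higher than the same answer reached by exhaustively scanning every segment, capturing the coverage-versus-efficiency balance the abstract describes.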