Long-form video understanding remains challenging due to the extended temporal structure and dense multimodal cues of such videos. Despite recent progress, many existing approaches still rely on hand-crafted reasoning pipelines or on token-consuming video preprocessing to guide MLLMs toward autonomous reasoning. To overcome these limitations, we introduce VideoARM, an Agentic Reasoning-over-hierarchical-Memory paradigm for long-form video understanding. Instead of static, exhaustive preprocessing, VideoARM performs adaptive, on-the-fly agentic reasoning and memory construction. Specifically, VideoARM runs a continuous, adaptive loop of observing, thinking, acting, and memorizing, in which a controller autonomously invokes tools to interpret the video in a coarse-to-fine manner, thereby substantially reducing token consumption. In parallel, a hierarchical multimodal memory continuously captures and updates multi-level clues as the agent operates, providing precise contextual information to support the controller's decision-making. Experiments on widely used benchmarks demonstrate that VideoARM outperforms the state-of-the-art method, DVD, while significantly reducing token consumption for long-form videos.
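To make the observe-think-act-memorize loop concrete, the following is a minimal sketch of how such an agentic controller with a hierarchical memory might be wired up. All names here (`HierarchicalMemory`, `Action`, `controller.decide`, the tool registry) are hypothetical illustrations under our assumptions, not VideoARM's actual implementation.

```python
# Minimal sketch of an observe-think-act-memorize agentic loop with a
# hierarchical memory. Every class and method name below is a hypothetical
# illustration, not the authors' actual code.

from dataclasses import dataclass, field


@dataclass
class Action:
    name: str            # tool to invoke, or "answer" to terminate the loop
    argument: str        # tool input (e.g., a time span) or the final answer
    level: str = "clip"  # memory level the resulting clue belongs to


@dataclass
class HierarchicalMemory:
    """Multi-level clue store: coarse video-level, mid clip-level, fine frame-level."""
    levels: dict = field(default_factory=lambda: {"video": [], "clip": [], "frame": []})

    def update(self, level: str, clue: str) -> None:
        self.levels[level].append(clue)

    def context(self) -> str:
        # Concatenate clues coarse-to-fine so the controller sees precise context.
        return "\n".join(
            f"[{lvl}] {c}" for lvl, clues in self.levels.items() for c in clues
        )


def agentic_loop(question: str, video: str, controller, tools: dict,
                 max_steps: int = 10) -> str:
    """Adaptive loop: observe memory, think, act via a tool, memorize the result."""
    memory = HierarchicalMemory()
    for _ in range(max_steps):
        # Think: the controller decides the next action from question + memory.
        action: Action = controller.decide(question, memory.context())
        if action.name == "answer":  # controller is confident enough to stop early
            return action.argument
        # Act: invoke a tool (e.g., coarse captioner, clip retriever, frame reader).
        observation = tools[action.name](video, action.argument)
        # Memorize: store the new clue at the level the tool operates on.
        memory.update(action.level, observation)
    # Budget exhausted: answer from whatever clues have been accumulated.
    return controller.decide(question, memory.context()).argument
```

The coarse-to-fine behavior in this sketch comes from the controller choosing cheap video-level tools first and descending to clip- or frame-level tools only when the accumulated clues are insufficient, which is how the token savings described above would arise.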