Long videos, characterized by temporal complexity and sparse task-relevant information, pose significant reasoning challenges for AI systems. Although various Large Language Model (LLM)-based approaches have advanced long video understanding, they still struggle to achieve both completeness and efficiency in capturing task-critical information. Inspired by human progressive visual cognition, we propose CogniGPT, a framework that leverages an interactive loop between a Multi-Granular Perception Agent (MGPA) and a Verification-Enhanced Reflection Agent (VERA) for efficient and reliable long video understanding. Specifically, MGPA mimics the divergent and focused attention of human vision to capture task-related information, while VERA verifies the perceived key clues to mitigate hallucination and to optimize subsequent perception strategies. Through this interactive process, CogniGPT identifies a minimal set of informative and reliable task-related clues. Extensive experiments on the EgoSchema, Video-MME, NExT-QA, and MovieChat datasets demonstrate CogniGPT's superiority in both accuracy and efficiency. Notably, on EgoSchema, it surpasses existing training-free methods while using only 11.2 frames on average and achieves performance comparable to that of Gemini 1.5-Pro.
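To make the perceive-verify-reflect interaction described above concrete, the following is a minimal sketch of how such a loop could be organized. All class, method, and parameter names (Clue, mgpa.perceive, vera.verify, vera.reflect, max_rounds) are illustrative assumptions, not the authors' actual implementation or API.

```python
# Minimal sketch of an MGPA/VERA-style interactive loop, assuming hypothetical
# agent objects with perceive(), verify(), and reflect() methods.

from dataclasses import dataclass


@dataclass
class Clue:
    frame_idx: int          # frame the clue was extracted from
    description: str        # textual clue produced by the perception agent
    verified: bool = False  # set by the reflection agent after verification


def cognigpt_loop(video_frames, question, mgpa, vera, max_rounds=3):
    """Iteratively gather a small set of verified, task-related clues."""
    clues: list[Clue] = []
    strategy = {"granularity": "coarse", "focus_frames": None}

    for _ in range(max_rounds):
        # 1) Perception agent: divergent (coarse) or focused (fine) perception,
        #    guided by the current strategy.
        new_clues = mgpa.perceive(video_frames, question, strategy)

        # 2) Reflection agent: check each perceived clue against the video
        #    to filter out hallucinated content.
        for clue in new_clues:
            clue.verified = vera.verify(clue, video_frames)
        clues.extend(c for c in new_clues if c.verified)

        # 3) Reflection agent: decide whether the verified clues suffice,
        #    or propose a refined perception strategy for the next round.
        done, strategy = vera.reflect(clues, question, strategy)
        if done:
            break

    return clues  # minimal set of informative, verified clues
```

The loop terminates either when the reflection step judges the collected clues sufficient to answer the question or after a fixed number of rounds, which is one simple way to keep the number of inspected frames small.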