Multimodal Large Language Models (MLLMs) have recently been applied to universal multimodal retrieval, where Chain-of-Thought (CoT) reasoning improves candidate reranking. However, existing approaches remain largely language-driven: they rely on static visual encodings and lack the ability to actively verify fine-grained visual evidence, which often leads to speculative reasoning in visually ambiguous cases. We propose V-Retrver, an evidence-driven retrieval framework that reformulates multimodal retrieval as an agentic reasoning process grounded in visual inspection. V-Retrver enables an MLLM to selectively acquire visual evidence during reasoning via external visual tools, performing a multimodal interleaved reasoning process that alternates between hypothesis generation and targeted visual verification. To train such an evidence-gathering retrieval agent, we adopt a curriculum-based learning strategy that combines supervised reasoning activation, rejection-based refinement, and reinforcement learning with an evidence-aligned objective. Experiments across multiple multimodal retrieval benchmarks demonstrate consistent improvements in retrieval accuracy (a 23.0% average improvement), perception-driven reasoning reliability, and generalization.
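To make the interleaved hypothesis-verification loop concrete, the following is a minimal illustrative sketch, not the authors' implementation: the function names (query_mllm, crop_region), the <verify>/<score> tag protocol, and the stubbed scoring are all hypothetical assumptions introduced here for exposition.

```python
# Hypothetical sketch of an evidence-driven reranking loop: the model reasons
# about a candidate, may request a targeted visual check (e.g., a crop), and
# only commits to a relevance score once it stops asking for more evidence.
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class Candidate:
    cand_id: str
    image_path: str
    caption: str


def query_mllm(prompt: str, images: List[str]) -> str:
    """Stub for an MLLM call; a real system would query a served model."""
    return "<verify> region=(0.2,0.1,0.6,0.5) </verify>"  # placeholder output


def crop_region(image_path: str, box: Tuple[float, ...]) -> str:
    """Stub visual tool: crop a normalized box and return the crop's path."""
    return image_path  # placeholder: a real tool would save and return the crop


def parse_verify_request(reply: str) -> Optional[Tuple[float, ...]]:
    """Extract a requested region from the model reply, if any (toy parser)."""
    if "<verify>" not in reply:
        return None
    span = reply.split("region=(")[1].split(")")[0]
    return tuple(float(x) for x in span.split(","))


def score_candidate(query: str, cand: Candidate, max_steps: int = 3) -> float:
    """Alternate hypothesis generation with targeted visual checks, then score."""
    evidence_images = [cand.image_path]
    for _ in range(max_steps):
        reply = query_mllm(
            f"Query: {query}\nCandidate: {cand.caption}\n"
            "Reason step by step; emit <verify> region=(x1,y1,x2,y2) </verify> "
            "if a closer look is needed, otherwise emit <score> 0-1 </score>.",
            evidence_images,
        )
        box = parse_verify_request(reply)
        if box is None:  # model is confident enough to commit to a score
            break
        evidence_images.append(crop_region(cand.image_path, box))
    return 0.5  # placeholder: a real system would parse the final <score> tag


def rerank(query: str, candidates: List[Candidate]) -> List[Candidate]:
    """Rerank retrieval candidates by their evidence-grounded scores."""
    return sorted(candidates, key=lambda c: score_candidate(query, c), reverse=True)
```

In this sketch the loop terminates as soon as the model stops requesting evidence, which mirrors the idea of selectively acquiring visual evidence only when reasoning alone is insufficient.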