Recent work shows that large multimodal models (LMMs) can self-improve from unlabeled data via self-play and intrinsic feedback. Yet existing self-evolving frameworks mainly reward final outcomes, leaving intermediate reasoning weakly constrained despite its importance for visually grounded decision making. We propose iReasoner, a self-evolving framework that improves an LMM's implicit reasoning by explicitly eliciting chain-of-thought (CoT) and rewarding its internal agreement. In a Proposer--Solver loop over unlabeled images, iReasoner augments outcome-level intrinsic rewards with a trajectory-aware signal defined over intermediate reasoning steps, providing learning signals that distinguish among reasoning paths leading to the same answer, without ground-truth labels or external judges. Starting from Qwen2.5-VL-7B, iReasoner yields gains of up to $+2.1$ points across diverse multimodal reasoning benchmarks under fully unsupervised post-training. We hope this work serves as a starting point for reasoning-aware self-improvement in LMMs in purely unsupervised settings.
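The reward design described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact formulation: it assumes each sampled trajectory is a list of intermediate reasoning steps plus a final answer, uses majority-vote answer agreement as the outcome-level intrinsic reward, and uses step overlap with same-answer trajectories as a stand-in for the trajectory-aware signal. The function name, data layout, and the mixing weight `alpha` are all hypothetical.

```python
from collections import Counter

def intrinsic_rewards(trajectories, alpha=0.5):
    """Hypothetical sketch of an outcome + trajectory-aware intrinsic reward.

    trajectories: list of (steps, answer) pairs, where steps is a list of
    strings (intermediate CoT steps) and answer is the final answer string.
    Returns one scalar reward per trajectory; no labels or judges are used.
    """
    n = len(trajectories)
    counts = Counter(answer for _, answer in trajectories)

    rewards = []
    for i, (steps, answer) in enumerate(trajectories):
        # Outcome-level reward: agreement of this answer with the sample set.
        outcome = counts[answer] / n

        # Trajectory-aware reward: average step overlap with the OTHER
        # trajectories that reached the same answer, so two paths ending in
        # the same answer can still receive different rewards.
        peers = [s for j, (s, a) in enumerate(trajectories)
                 if j != i and a == answer]
        if peers and steps:
            shared = set(steps)
            traj = sum(len(shared & set(p)) / len(shared)
                       for p in peers) / len(peers)
        else:
            traj = 0.0

        rewards.append((1 - alpha) * outcome + alpha * traj)
    return rewards
```

Under this sketch, a trajectory whose intermediate steps agree with other trajectories reaching the same answer scores higher than an isolated path with the same final answer, which is the kind of distinction a purely outcome-level reward cannot make.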