Recent work shows that large multimodal models (LMMs) can self-improve from unlabeled data via self-play and intrinsic feedback. Yet existing self-evolving frameworks mainly reward final outcomes, leaving intermediate reasoning weakly constrained despite its importance for visually grounded decision making. We propose iReasoner, a self-evolving framework that improves an LMM's implicit reasoning by explicitly eliciting chain-of-thought (CoT) and rewarding its internal agreement. In a Proposer--Solver loop over unlabeled images, iReasoner augments outcome-level intrinsic rewards with a trajectory-aware signal defined over intermediate reasoning steps, providing learning signals that distinguish reasoning paths leading to the same answer, without ground-truth labels or external judges. Starting from Qwen2.5-VL-7B, iReasoner yields gains of up to $+2.1$ points across diverse multimodal reasoning benchmarks under fully unsupervised post-training. We hope this work serves as a starting point for reasoning-aware self-improvement of LMMs in purely unsupervised settings.
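To make the reward structure concrete, the following is a minimal Python sketch of one plausible instantiation of the Proposer--Solver loop described above: an outcome-level intrinsic reward (majority-vote agreement across sampled answers) combined with a trajectory-aware signal over intermediate CoT steps. All function names (`propose_question`, `solve_with_cot`, `trajectory_reward`) and the specific agreement measures are hypothetical placeholders, not the authors' actual algorithm or API.

```python
# Hedged sketch of a Proposer--Solver self-evolution step with a
# trajectory-aware reward. Stubs stand in for the Proposer and Solver LMM calls.

import random
from collections import Counter
from typing import List, Tuple


def propose_question(image_id: str) -> str:
    """Hypothetical Proposer: derives a question from an unlabeled image."""
    return f"What object is most salient in {image_id}?"


def solve_with_cot(question: str, seed: int) -> Tuple[List[str], str]:
    """Hypothetical Solver: returns (intermediate CoT steps, final answer)."""
    rng = random.Random(seed)
    steps = [f"step-{i}: inspect region {rng.randint(0, 3)}" for i in range(3)]
    answer = f"object-{rng.randint(0, 1)}"
    return steps, answer


def outcome_reward(answer: str, answers: List[str]) -> float:
    """Outcome-level intrinsic reward: agreement with the majority-vote answer."""
    majority, _ = Counter(answers).most_common(1)[0]
    return 1.0 if answer == majority else 0.0


def trajectory_reward(steps: List[str], all_trajectories: List[List[str]]) -> float:
    """Trajectory-aware signal (one crude proxy for internal agreement):
    fraction of this trajectory's steps that also occur in other samples."""
    others = {s for traj in all_trajectories if traj is not steps for s in traj}
    return sum(s in others for s in steps) / len(steps) if steps else 0.0


def rollout(image_id: str, n_samples: int = 4, beta: float = 0.5):
    """Sample several CoT trajectories for one proposed question and score each
    with outcome + beta * trajectory reward, so trajectories that reach the same
    answer can still receive different credit."""
    question = propose_question(image_id)
    trajs = [solve_with_cot(question, seed=k) for k in range(n_samples)]
    answers = [answer for _, answer in trajs]
    all_steps = [steps for steps, _ in trajs]
    rewards = [
        outcome_reward(answer, answers) + beta * trajectory_reward(steps, all_steps)
        for steps, answer in trajs
    ]
    return trajs, rewards


if __name__ == "__main__":
    trajectories, rewards = rollout("image_0001")
    for (steps, answer), r in zip(trajectories, rewards):
        print(answer, round(r, 2))
```

In practice the scalar rewards would feed a standard policy-optimization update (e.g. a GRPO/PPO-style objective) on the Solver; the sketch only illustrates how the trajectory-aware term separates reasoning paths that the outcome reward alone would treat identically.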