While reinforcement learning (RL) over chains of thought has significantly advanced language models in tasks such as mathematics and coding, visual reasoning introduces added complexity by requiring models to direct visual attention, interpret perceptual inputs, and ground abstract reasoning in spatial evidence. We introduce ViGoRL (Visually Grounded Reinforcement Learning), a vision-language model trained with RL to explicitly anchor each reasoning step to specific visual coordinates. Inspired by human visual decision-making, ViGoRL learns to produce spatially grounded reasoning traces, guiding visual attention to task-relevant regions at each step. When fine-grained exploration is required, our novel multi-turn RL framework enables the model to dynamically zoom into predicted coordinates as reasoning unfolds. Across a diverse set of visual reasoning benchmarks, including SAT-2 and BLINK for spatial reasoning, V*Bench for visual search, and ScreenSpot and VisualWebArena for web-based grounding, ViGoRL consistently outperforms both supervised fine-tuning and conventional RL baselines that lack explicit grounding mechanisms. Incorporating multi-turn RL with zoomed-in visual feedback significantly improves ViGoRL's performance on localizing small GUI elements and on visual search, achieving 86.4% on V*Bench. Additionally, we find that grounding amplifies other visual behaviors, such as region exploration, grounded subgoal setting, and visual verification. Finally, human evaluations show that the model's visual references are not only spatially accurate but also helpful for understanding model reasoning steps. Our results show that visually grounded RL is a strong paradigm for imbuing models with general-purpose visual reasoning.