As agentic systems increasingly rely on reinforcement learning from verifiable rewards, standardized ``gym'' infrastructure has become essential for rapid iteration, reproducibility, and fair comparison. Vision agents lack such infrastructure, limiting systematic study of what drives their learning and where current models fall short. We introduce \textbf{Gym-V}, a unified platform of 179 procedurally generated visual environments across 10 domains with controllable difficulty, enabling controlled experiments that were previously infeasible across fragmented toolkits. Using Gym-V, we find that observation scaffolding is more decisive for training success than the choice of RL algorithm, with captions and game rules determining whether learning succeeds at all. Cross-domain transfer experiments further show that training on diverse task categories generalizes broadly, whereas narrow training can cause negative transfer, and that multi-turn interaction amplifies all of these effects. Gym-V is released as a convenient foundation of training environments and evaluation toolkits, aiming to accelerate future research on agentic VLMs.