Recent advances in reinforcement learning (RL) have strengthened the reasoning capabilities of vision-language models (VLMs). However, enhancing policy exploration to better scale test-time compute remains largely underexplored. In addition, VLMs continue to struggle with imperfect visual perception, which in turn degrades the subsequent reasoning process. We introduce NoisyRollout, a simple yet effective data augmentation method that addresses these issues by mixing training trajectories from both clean and moderately distorted images. This approach injects perceptual diversity, encouraging better policy exploration and leading to more robust reasoning. A noise annealing schedule gradually reduces distortion strength, aiding exploration early in training while ensuring stability later. Crucially, our method is easy to adopt: it requires no additional training cost and no modifications to the RL objective. Extensive experiments on two distinct training datasets demonstrate that NoisyRollout achieves state-of-the-art performance among open-source RL-tuned models across five out-of-domain reasoning and perception benchmarks. Furthermore, we validate the effectiveness of NoisyRollout across model sizes (7B and 32B), data scales (from 1K to 6K), and image augmentation types (Gaussian noise and rotation), highlighting its generalizability and scalability.
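The core recipe, mixing rollouts from clean and distorted views of the same image under an annealed noise schedule, can be sketched as follows. This is a minimal illustration only: the linear schedule, Gaussian pixel noise, and all hyperparameters (`sigma_max`, the clean/noisy rollout counts) are assumptions for exposition, not the paper's exact settings.

```python
import numpy as np

def noise_strength(step, total_steps, sigma_max=0.1):
    """Linearly anneal distortion strength toward zero over training.
    (Illustrative schedule; the actual annealing may differ.)"""
    return sigma_max * max(0.0, 1.0 - step / total_steps)

def distort(image, sigma, rng):
    """Apply moderate Gaussian pixel noise (image assumed in [0, 1])."""
    noisy = image + rng.normal(0.0, sigma, size=image.shape)
    return np.clip(noisy, 0.0, 1.0)

def mixed_rollout_inputs(image, step, total_steps, n_clean=4, n_noisy=4, seed=0):
    """Build the image inputs for one rollout batch: n_clean copies of
    the clean image plus n_noisy distorted copies. Trajectories sampled
    from both sets are then optimized together under the unchanged RL
    objective."""
    rng = np.random.default_rng(seed)
    sigma = noise_strength(step, total_steps)
    clean = [image.copy() for _ in range(n_clean)]
    noisy = [distort(image, sigma, rng) for _ in range(n_noisy)]
    return clean + noisy
```

Because the augmentation only changes which images the rollouts are sampled from, early training sees diverse, perturbed views (encouraging exploration) while late training, with `sigma` annealed to zero, reduces to standard clean-image rollouts.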