Recent progress, such as DeepSeek-R1, has shown that the GRPO algorithm, a Reinforcement Learning (RL) approach, can effectively train Chain-of-Thought (CoT) reasoning in Large Language Models (LLMs) and Vision-Language Models (VLMs). In this paper, we analyze three challenges of GRPO: gradient coupling between thoughts and answers, sparse reward signals caused by limited parallel sampling, and unstable advantage estimation. To mitigate these challenges, we propose GRPO-MA, a simple yet theoretically grounded method that generates multiple answers from each thought process, enabling more robust and efficient optimization. Theoretically, we show that the variance of the thought-level advantage estimate decreases as the number of answers per thought increases. Empirically, our gradient analysis confirms this effect, showing that GRPO-MA reduces gradient spikes compared to GRPO. Experiments on math, code, and diverse multimodal tasks demonstrate that GRPO-MA substantially improves performance and training efficiency. Our ablation studies further reveal that increasing the number of answers per thought consistently enhances model performance.
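To make the variance claim concrete, the following is a minimal sketch of the underlying intuition; the notation (thought index $i$, group size $G$, answer count $M$, per-answer rewards $r_{i,m}$, and per-thought reward variance $\sigma_i^{2}$) is assumed for illustration and is not taken from the paper. If each thought is scored by the mean of $M$ conditionally independent answer rewards,
\[
\bar{r}_i = \frac{1}{M}\sum_{m=1}^{M} r_{i,m},
\qquad
\operatorname{Var}\!\left(\bar{r}_i \mid \text{thought } i\right) = \frac{\sigma_i^{2}}{M},
\]
then a GRPO-style group-normalized advantage built on these per-thought scores, e.g.
\[
\hat{A}_i = \frac{\bar{r}_i - \operatorname{mean}(\bar{r}_1,\dots,\bar{r}_G)}{\operatorname{std}(\bar{r}_1,\dots,\bar{r}_G)},
\]
inherits the reduced per-thought noise, which is one way to read the abstract's statement that the variance of the thought-level advantage decreases as the number of answers per thought grows.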