Most progress in recent coder models has been driven by supervised fine-tuning (SFT), while the potential of reinforcement learning (RL) remains largely unexplored, primarily due to the lack of reliable reward data and models in the code domain. In this paper, we address this challenge by leveraging automated large-scale test-case synthesis to enhance code model training. Specifically, we design a pipeline that generates extensive (question, test-case) pairs from existing code data. Using these test cases, we construct preference pairs based on pass rates over sampled programs and train reward models with the Bradley-Terry loss. The resulting reward model yields an average 10-point improvement for Llama-3.1-8B-Ins and a 5-point improvement for Qwen2.5-Coder-7B-Ins through best-of-32 sampling, putting the 7B model on par with the 236B DeepSeek-V2.5. Furthermore, we conduct reinforcement learning with both reward models and test-case pass rewards, leading to consistent improvements across HumanEval, MBPP, BigCodeBench, and LiveCodeBench (V4). Notably, following R1-style training, we start directly from Qwen2.5-Coder-base and show that our RL training improves the model on HumanEval-plus by over 25\% and on MBPP-plus by 6\% in merely 80 optimization steps. We believe our results highlight the great potential of reinforcement learning in coder models.
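The pass-rate-based preference construction and the Bradley-Terry objective can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the helper names `make_preference_pairs` and `bradley_terry_loss`, the `margin` filter, and the use of precomputed scalar rewards are all assumptions for exposition.

```python
import math
from itertools import combinations

def make_preference_pairs(programs, pass_rates, margin=0.0):
    """Build (chosen, rejected) pairs from sampled programs:
    the program with the higher test-case pass rate is preferred.
    `margin` optionally filters out near-ties (an assumed knob)."""
    pairs = []
    for i, j in combinations(range(len(programs)), 2):
        if pass_rates[i] - pass_rates[j] > margin:
            pairs.append((programs[i], programs[j]))
        elif pass_rates[j] - pass_rates[i] > margin:
            pairs.append((programs[j], programs[i]))
    return pairs

def bradley_terry_loss(r_chosen, r_rejected):
    """Bradley-Terry negative log-likelihood for one preference pair:
    -log sigmoid(r_chosen - r_rejected). The loss shrinks as the
    reward model separates chosen from rejected by a wider margin."""
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))
```

In a real training loop, `r_chosen` and `r_rejected` would be the reward model's scores for the two programs, and the loss would be averaged over a batch of preference pairs before backpropagation.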
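The test-case pass reward used for RL can be sketched as below. This is a hypothetical simplification: it assumes the candidate program is an already-callable Python function, whereas a real pipeline would execute generated code against the synthesized test cases in a sandbox.

```python
def pass_rate_reward(program_fn, test_cases):
    """Test-case pass reward: the fraction of (input-args, expected-output)
    pairs the candidate program answers correctly. Exceptions from the
    candidate count as failures."""
    passed = 0
    for args, expected in test_cases:
        try:
            if program_fn(*args) == expected:
                passed += 1
        except Exception:
            pass  # a crashing candidate fails this test case
    return passed / len(test_cases)
```

This dense scalar in [0, 1] can serve directly as the RL reward, or be combined with a learned reward model's score.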