Test-time scaling methods have rapidly gained popularity for improving the reasoning performance of Large Language Models, owing to their computational efficiency and the fact that they require no additional parameter training. One such method is budget forcing, a decoding intervention strategy that allocates extra compute budget for thinking and elicits the model's inherent self-correcting behavior. However, budget forcing relies on supervised fine-tuning (SFT) on long-context reasoning traces, which causes performance degradation in smaller models due to verbose responses. To address this, we propose a framework that integrates reinforcement learning (RL) to improve token efficiency and boost the performance of a 1.5B model on mathematical reasoning. Using only 1.5K training samples, we find that our SFT+RL model outperforms its SFT-only counterpart on the GSM8K dataset across varying compute budgets. Our main findings show overall higher accuracy while reducing token usage by over 40% relative to the SFT model, demonstrating that RL can recover the losses incurred by long-context training while improving performance in mathematical reasoning.
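To make the decoding intervention concrete, below is a minimal sketch of budget forcing, assuming a HuggingFace causal LM that wraps its chain of thought in a "</think>" delimiter. The model name, the "Wait" continuation cue, the budget values, and the helper `budget_forced_generate` are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch of budget forcing as a decoding intervention.
# Assumptions (not from the paper): the model emits its chain of
# thought before a "</think>" delimiter, and appending "Wait" after
# stripping that delimiter elicits continued self-correction.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # placeholder 1.5B model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16)

def budget_forced_generate(prompt: str, thinking_budget: int = 1024,
                           max_extensions: int = 2) -> str:
    """Decode with budget forcing: if the model closes its thinking
    block before the token budget is spent, strip the delimiter and
    append "Wait" to force further reasoning, up to `max_extensions`
    times; otherwise return the completion as-is."""
    text = prompt
    spent = 0
    for _ in range(max_extensions + 1):
        inputs = tokenizer(text, return_tensors="pt").to(model.device)
        output = model.generate(**inputs,
                                max_new_tokens=thinking_budget - spent,
                                do_sample=False)
        new_tokens = output[0, inputs["input_ids"].shape[1]:]
        spent += new_tokens.shape[0]
        # Keep special tokens so the "</think>" delimiter stays visible.
        completion = tokenizer.decode(new_tokens, skip_special_tokens=False)
        if "</think>" in completion and spent < thinking_budget:
            # Early stop: keep only the thought so far and nudge the
            # model to keep reasoning within the remaining budget.
            text = text + completion.split("</think>")[0] + " Wait"
        else:
            text = text + completion
            break
    return text
```

The same loop can also cap thinking from above: once the budget is exhausted, one would forcibly append the end-of-thinking delimiter so the model proceeds to its final answer.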