Enhancing the reasoning capabilities of large language models (LLMs) typically relies on massive computational resources and extensive datasets, limiting accessibility in resource-constrained settings. Our study investigates the potential of reinforcement learning (RL) to improve reasoning in small LLMs, focusing on a 1.5-billion-parameter model, DeepSeek-R1-Distill-Qwen-1.5B, under strict constraints: training on 4 NVIDIA A40 GPUs (48 GB VRAM each) within 24 hours. Adapting the Group Relative Policy Optimization (GRPO) algorithm and curating a compact, high-quality mathematical reasoning dataset, we conducted three experiments to explore model behavior and performance. Our results demonstrate rapid reasoning gains (AMC23 accuracy rising from 63% to 80%, and AIME24 reaching 46.7%, surpassing o1-preview) using only 7,000 samples at a training cost of $42, compared to thousands of dollars for baseline models. However, challenges such as optimization instability and length constraints emerged with prolonged training. These findings highlight the efficacy of RL-based fine-tuning for small LLMs, offering a cost-effective alternative to large-scale approaches. We release our code and datasets as open-source resources, providing insight into the trade-offs involved and laying a foundation for scalable, reasoning-capable LLMs in resource-limited environments. All resources are available at https://github.com/knoveleng/open-rs.
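For reference, the following is a minimal sketch of the GRPO objective in its standard formulation (following DeepSeekMath, Shao et al., 2024), which this work adapts; the specific hyperparameter settings and any modifications made here are described in the paper itself, not in this sketch. For a prompt $q$, a group of $G$ completions $\{o_i\}_{i=1}^{G}$ is sampled from the old policy, $\varepsilon$ is the clipping range, and $\beta$ weights a KL penalty against a frozen reference policy $\pi_{\text{ref}}$:

\[
\mathcal{J}_{\text{GRPO}}(\theta)
= \mathbb{E}_{q \sim P(Q),\, \{o_i\}_{i=1}^{G} \sim \pi_{\theta_{\text{old}}}(\cdot \mid q)}
\left[
\frac{1}{G} \sum_{i=1}^{G} \frac{1}{|o_i|} \sum_{t=1}^{|o_i|}
\left(
\min\!\Big( r_{i,t}(\theta)\, \hat{A}_{i,t},\;
\operatorname{clip}\!\big(r_{i,t}(\theta),\, 1-\varepsilon,\, 1+\varepsilon\big)\, \hat{A}_{i,t} \Big)
- \beta\, \mathbb{D}_{\text{KL}}\!\big[\pi_{\theta} \,\|\, \pi_{\text{ref}}\big]
\right)
\right],
\]

where $r_{i,t}(\theta) = \pi_{\theta}(o_{i,t} \mid q, o_{i,<t}) / \pi_{\theta_{\text{old}}}(o_{i,t} \mid q, o_{i,<t})$ is the token-level importance ratio. The distinguishing feature of GRPO is that the advantage is computed group-relatively from the scalar rewards $r_i$ of the sampled completions, with no learned value model: $\hat{A}_{i,t} = \big(r_i - \operatorname{mean}(\{r_j\}_{j=1}^{G})\big) / \operatorname{std}(\{r_j\}_{j=1}^{G})$. Dropping the critic is what makes the algorithm attractive under the memory and time budget described above.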