Since the advent of reasoning-based large language models (LLMs), many works have successfully distilled reasoning capabilities into student models, substantially narrowing the gap between reasoning models and standard LLMs on coding tasks. Despite this progress, much of the work on distilling reasoning models remains locked behind proprietary datasets or lacks detail on data curation, filtering, and subsequent training. To address this, we construct a superior supervised fine-tuning (SFT) dataset, which we use to achieve state-of-the-art coding capabilities in models of various sizes. Our distilled models use only SFT to achieve 61.8% on LiveCodeBench and 24.6% on CodeContests, surpassing alternatives trained with reinforcement learning. We then analyze the data sources used to construct our dataset, the impact of code execution filtering, and the importance of instruction/solution diversity. We observe that execution filtering negatively affects benchmark accuracy, leading us to prioritize instruction diversity over solution correctness. Finally, we analyze the token efficiency and reasoning patterns exhibited by these models. We will open-source these datasets and distilled models to the community.