Large reasoning models such as DeepSeek-R1 and their distilled variants achieve strong performance on complex reasoning tasks. However, distilling these models typically demands large-scale data for supervised fine-tuning (SFT), motivating the pursuit of data-efficient training methods. To address this, we propose a skill-centric distillation framework that efficiently transfers reasoning ability to weaker models via two components: (1) skill-based data selection, which prioritizes examples targeting the student model's weaker skills, and (2) skill-aware fine-tuning, which encourages explicit skill decomposition during problem solving. With only 1,000 training examples selected from a 100K teacher-generated corpus, our method surpasses random-selection SFT baselines by +1.6% on Qwen3-4B and +1.4% on Qwen3-8B across five mathematical reasoning benchmarks. Further analysis confirms that these gains are concentrated on the skills emphasized during training, highlighting the effectiveness of skill-centric training for efficient reasoning distillation.
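To make the skill-based data selection component concrete, the sketch below shows one way such a selector could work, assuming each teacher-generated example is annotated with the skills it exercises and that the student's per-skill accuracy has been estimated on a probe set. The data layout, scoring rule, and function names are illustrative assumptions, not the paper's actual implementation.

```python
from typing import Dict, List


def select_examples(corpus: List[dict],
                    skill_accuracy: Dict[str, float],
                    budget: int = 1000) -> List[dict]:
    """Pick `budget` examples whose annotated skills the student is weakest at.

    Each corpus item is assumed to look like:
        {"problem": ..., "teacher_solution": ..., "skills": ["algebra", ...]}
    `skill_accuracy` maps each skill name to the student's accuracy on a probe set.
    """

    def weakness_score(example: dict) -> float:
        # Higher score = the example exercises skills the student fails more often.
        skills = example.get("skills", [])
        if not skills:
            return 0.0
        return sum(1.0 - skill_accuracy.get(s, 0.0) for s in skills) / len(skills)

    ranked = sorted(corpus, key=weakness_score, reverse=True)
    return ranked[:budget]
```

Under this reading, the selected subset is then used for SFT on the teacher's solutions, with the skill-aware variant additionally prompting the student to name and apply the relevant skills step by step.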