Specialized reasoning language models (RLMs) have demonstrated that scaling test-time computation through detailed reasoning traces substantially improves performance. Although these traces effectively facilitate knowledge distillation into smaller, instruction-tuned models, the precise nature of the transferred reasoning remains unclear. In this study, we investigate the extent to which distilled models internalize replicated stylistic patterns during reasoning. To this end, we systematically analyze reasoning traces, identifying structural and lexical patterns that characterize successful reasoning. We then introduce two new datasets -- a dataset of emergent reasoning traces and a synthetic dataset explicitly constructed to replicate these stylistic patterns -- to precisely examine their influence on distilled models' reasoning capabilities. We find that models trained on the synthetic traces achieve performance comparable to those trained on the emergent traces, indicating that distilled reasoning abilities rely significantly on surface-level patterns. Surprisingly, we observe performance gains even when the synthetic traces are altered to lead to incorrect answers. Our findings highlight how stylistic patterns can be leveraged to efficiently enhance LM reasoning across diverse model families.