Improving the reasoning capabilities of large language models (LLMs) typically relies either on the model's ability to sample a correct solution to be reinforced or on the existence of a stronger model able to solve the problem. However, many difficult problems remain intractable even for current frontier models, preventing the extraction of valid training signals. A promising alternative is to leverage high-quality expert human solutions, yet naive imitation of this data fails because it is fundamentally out of distribution: expert solutions are typically didactic, containing implicit reasoning gaps intended for human readers rather than computational models. Furthermore, high-quality expert solutions are expensive, necessitating generalizable, sample-efficient training methods. We propose Distribution Aligned Imitation Learning (DAIL), a two-step method that bridges the distributional gap by first transforming expert solutions into detailed, in-distribution reasoning traces and then applying a contrastive objective to focus learning on expert insights and methodologies. We find that DAIL can leverage fewer than 1000 high-quality expert solutions to achieve 10-25% pass@k gains on Qwen2.5-Instruct and Qwen3 models, improve reasoning efficiency by 2x to 4x, and enable out-of-domain generalization.
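The abstract does not specify the exact form of the contrastive objective; as a purely illustrative sketch, one simple pairwise formulation would score the aligned expert trace against the model's own trace under the current policy and minimize a logistic loss on the log-probability gap. The function name, the `beta` temperature, and the DPO-style form are all assumptions for illustration, not the paper's actual objective.

```python
import math

def pairwise_contrastive_loss(logp_expert: float, logp_model: float,
                              beta: float = 0.1) -> float:
    """Hypothetical DPO-style pairwise loss: reward the policy for assigning
    higher log-probability to the aligned expert trace than to its own trace.

    logp_expert: policy log-prob of the rewritten, in-distribution expert trace
    logp_model:  policy log-prob of the model's own (e.g., failed) trace
    """
    margin = beta * (logp_expert - logp_model)
    # -log(sigmoid(margin)): small when the expert trace is strongly preferred
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

Under this sketch, the loss equals log 2 when the two traces are scored equally and shrinks as the policy shifts probability mass toward the expert trace.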