While Large Language Models (LLMs) have demonstrated remarkable capabilities on complex tasks through Chain-of-Thought reasoning, practical resource constraints have sparked interest in transferring these abilities to smaller models. However, achieving both in-domain performance and cross-domain generalization remains challenging. Existing approaches typically restrict the student to following a single golden rationale and treat different reasoning paths as independent. Because teacher and student differ in their inductive biases and intrinsic preferences, and because the student's capacity and reasoning preferences evolve throughout training, the teacher's "optimal" rationale can act as out-of-distribution noise. This misalignment degrades the student's latent reasoning distribution, leading to suboptimal performance. To bridge this gap, we propose MIND, a capability-adaptive framework that shifts distillation from passive mimicry to active cognitive construction. MIND synthesizes diverse teacher perspectives through a novel "Teaching Assistant" network. Through a Feedback-Driven Inertia Calibration mechanism, this network uses an inertia-filtered training loss to align supervision with the student's current adaptability, improving performance while mitigating catastrophic forgetting. Extensive experiments demonstrate that MIND achieves state-of-the-art performance on both in-distribution and out-of-distribution benchmarks, and our in-depth latent-space analysis further confirms the mechanism of reasoning-ability internalization.
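The abstract's exact formulation of the Feedback-Driven Inertia Calibration mechanism is not given here; the following is only a minimal sketch of one plausible reading, assuming the "inertia" is an exponential moving average (EMA) of the training loss, and that rationales whose loss spikes far above that inertia are masked out as supervision beyond the student's current adaptability. The class name `InertiaFilter` and both hyperparameters are hypothetical, not taken from the paper.

```python
class InertiaFilter:
    """Hypothetical sketch of inertia-filtered loss weighting.

    Tracks an EMA ("inertia") of the mean training loss; rationales whose
    per-sample loss exceeds `tolerance` times the inertia are treated as
    out-of-distribution noise for the student and given zero weight.
    """

    def __init__(self, momentum=0.9, tolerance=1.5):
        self.momentum = momentum    # EMA smoothing factor (assumed)
        self.tolerance = tolerance  # spike threshold relative to EMA (assumed)
        self.ema = None             # running loss inertia

    def weights(self, per_sample_loss):
        batch_mean = sum(per_sample_loss) / len(per_sample_loss)
        if self.ema is None:
            self.ema = batch_mean
        else:
            self.ema = self.momentum * self.ema + (1 - self.momentum) * batch_mean
        # Keep rationales the student can plausibly absorb; mask the rest.
        return [1.0 if loss <= self.tolerance * self.ema else 0.0
                for loss in per_sample_loss]


filt = InertiaFilter()
losses = [0.5, 0.6, 5.0]  # third rationale is far beyond the student's reach
w = filt.weights(losses)   # → the outlier rationale is masked out
filtered = sum(wi * li for wi, li in zip(w, losses)) / max(sum(w), 1.0)
```

Under this reading, supervision adapts to the student automatically: early in training, when all losses are high, the inertia is high and most rationales pass; as the student improves, the inertia falls and only rationales near its current competence contribute to the update.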