On-policy distillation (OPD), which aligns the student with the teacher's logit distribution on student-generated trajectories, has demonstrated strong empirical gains in student performance and often outperforms off-policy distillation and reinforcement learning (RL) paradigms. In this work, we first show theoretically that OPD is a special case of dense KL-constrained RL in which the reward function and the KL regularization are always weighted equally and the reference model can be any model. We then propose the Generalized On-Policy Distillation (G-OPD) framework, which extends the standard OPD objective by introducing a flexible reference model and a reward scaling factor that controls the relative weight of the reward term against the KL regularization. Through comprehensive experiments on math reasoning and code generation tasks, we derive two novel insights: (1) Setting the reward scaling factor to be greater than 1 (i.e., reward extrapolation), which we term ExOPD, consistently improves over standard OPD across a range of teacher-student size pairings. In particular, when merging knowledge from different domain experts (each obtained by applying domain-specific RL to the same student model) back into the original student, ExOPD enables the student to surpass the teachers' performance boundary and outperform the domain teachers. (2) Building on ExOPD, we further find that in the strong-to-weak distillation setting (i.e., distilling a smaller student from a larger teacher), performing reward correction by choosing the teacher's pre-RL base model as the reference model yields a more accurate reward signal and further improves distillation performance. However, this choice assumes access to the teacher's pre-RL variant and incurs additional computational overhead. We hope our work offers new insights for future research on OPD.
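To make the relationship described above concrete, the following is a minimal sketch (not the paper's exact formulation) of a G-OPD-style objective consistent with this description, assuming a student policy $\pi_\theta$, teacher $\pi_T$, reference model $\pi_{\mathrm{ref}}$, and a hypothetical reward scaling factor $\lambda$:

\[
\mathcal{J}(\theta) \;=\; \mathbb{E}_{y \sim \pi_\theta}\!\left[\sum_{t} \lambda \,\underbrace{\log\frac{\pi_T(y_t \mid s_t)}{\pi_{\mathrm{ref}}(y_t \mid s_t)}}_{\text{dense reward}} \;-\; \underbrace{\log\frac{\pi_\theta(y_t \mid s_t)}{\pi_{\mathrm{ref}}(y_t \mid s_t)}}_{\text{KL regularization}}\right].
\]

Under this sketch, setting $\lambda = 1$ makes the $\pi_{\mathrm{ref}}$ terms cancel, leaving the reverse KL between student and teacher on student trajectories (standard OPD, with $\pi_{\mathrm{ref}}$ arbitrary); $\lambda > 1$ corresponds to reward extrapolation (ExOPD); and choosing $\pi_{\mathrm{ref}}$ as the teacher's pre-RL base model corresponds to the reward correction discussed above.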