Few-step diffusion or flow-based generative models typically distill a velocity-predicting teacher into a student that predicts a shortcut towards denoised data. This format mismatch has led to complex distillation procedures that often suffer from a quality-diversity trade-off. To address this, we propose policy-based flow models ($\pi$-Flow). $\pi$-Flow modifies the output layer of a student flow model to predict a network-free policy at one timestep. The policy then produces dynamic flow velocities at future substeps with negligible overhead, enabling fast and accurate ODE integration on these substeps without extra network evaluations. To match the policy's ODE trajectory to the teacher's, we introduce a novel imitation distillation approach, which matches the policy's velocity to the teacher's along the policy's trajectory using a standard $\ell_2$ flow matching loss. By simply mimicking the teacher's behavior, $\pi$-Flow enables stable and scalable training and avoids the quality-diversity trade-off. On ImageNet $256^2$, it attains a 1-NFE FID of 2.85, outperforming previous 1-NFE models of the same DiT architecture. On FLUX.1-12B and Qwen-Image-20B at 4 NFEs, $\pi$-Flow achieves substantially better diversity than state-of-the-art DMD models, while maintaining teacher-level quality.