Few-step diffusion or flow-based generative models typically distill a velocity-predicting teacher into a student that predicts a shortcut towards denoised data. This format mismatch has led to complex distillation procedures that often suffer from a quality-diversity trade-off. To address this, we propose policy-based flow models ($\pi$-Flow). $\pi$-Flow modifies the output layer of a student flow model to predict a network-free policy at one timestep. The policy then produces dynamic flow velocities at future substeps with negligible overhead, enabling fast and accurate ODE integration on these substeps without extra network evaluations. To match the policy's ODE trajectory to the teacher's, we introduce a novel imitation distillation approach, which matches the policy's velocity to the teacher's along the policy's trajectory using a standard $\ell_2$ flow matching loss. By simply mimicking the teacher's behavior, $\pi$-Flow enables stable and scalable training and avoids the quality-diversity trade-off. On ImageNet 256$^2$, it attains a 1-NFE FID of 2.85, outperforming MeanFlow of the same DiT architecture. On FLUX.1-12B and Qwen-Image-20B at 4 NFEs, $\pi$-Flow achieves substantially better diversity than state-of-the-art few-step methods, while maintaining teacher-level quality.
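To make the distillation recipe concrete, the following is a minimal PyTorch sketch of the imitation-distillation loop described above: one student call predicts policy parameters at a timestep, the network-free policy supplies velocities at later substeps for cheap Euler integration, and a standard $\ell_2$ flow matching loss matches those velocities to the teacher's along the policy's own trajectory. The policy parameterization (`policy_velocity`), network widths, substep count, and time convention are illustrative assumptions, not the actual $\pi$-Flow design.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of pi-Flow-style imitation distillation; names and sizes are illustrative.
K = 4          # number of policy basis velocities predicted by the student (assumption)
DIM = 8        # toy data dimensionality
SUBSTEPS = 4   # ODE substeps integrated with the network-free policy

class Teacher(nn.Module):
    """Stand-in for a pretrained velocity-predicting flow teacher v_T(x, t)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(DIM + 1, 64), nn.SiLU(), nn.Linear(64, DIM))
    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=-1))

class Student(nn.Module):
    """Student whose output layer predicts policy parameters at one timestep."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(DIM + 1, 64), nn.SiLU(), nn.Linear(64, K * DIM))
    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=-1)).view(-1, K, DIM)

def policy_velocity(params, x, s):
    """Network-free policy: blend K predicted basis velocities by normalized substep time s.

    This Bernstein-like blend is a placeholder parameterization (it ignores x),
    not the paper's actual policy; it only illustrates that no extra network call is needed.
    """
    s = s.squeeze(-1)                                              # (B,)
    w = torch.stack([(1 - s) ** k * s ** (K - 1 - k) for k in range(K)], dim=-1)  # (B, K)
    w = w / w.sum(dim=-1, keepdim=True)
    return (w.unsqueeze(-1) * params).sum(dim=1)                   # (B, DIM)

teacher, student = Teacher(), Student()
opt = torch.optim.Adam(student.parameters(), lr=1e-4)

for step in range(10):
    x = torch.randn(16, DIM)                   # noisy state at the current timestep
    t = torch.rand(16, 1)                      # current timestep (convention: t = 1 is data, an assumption)
    params = student(x, t)                     # ONE network call -> policy parameters
    xs, s = x, t.clone()
    ds = (1.0 - t) / SUBSTEPS                  # substep size toward t = 1
    loss = 0.0
    for _ in range(SUBSTEPS):
        v_pi = policy_velocity(params, xs, (s - t) / (1 - t + 1e-8))
        with torch.no_grad():
            v_teacher = teacher(xs, s)         # teacher velocity queried along the policy's trajectory
        loss = loss + ((v_pi - v_teacher) ** 2).mean()  # standard l2 flow matching loss
        xs = xs + ds * v_pi                    # cheap Euler substep, no extra student evaluation
        s = s + ds
    opt.zero_grad()
    loss.backward()
    opt.step()
```

At inference, the same pattern applies without the teacher: each of the few student evaluations yields a policy whose substeps are integrated for free, so 1 NFE can still realize several accurate ODE substeps.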