Online reinforcement learning (RL) has been central to post-training language models, but its extension to diffusion models remains challenging due to intractable likelihoods. Recent works discretize the reverse sampling process to enable GRPO-style training, yet they inherit fundamental drawbacks, including solver restrictions, forward-reverse inconsistency, and complicated integration with classifier-free guidance (CFG). We introduce Diffusion Negative-aware FineTuning (DiffusionNFT), a new online RL paradigm that optimizes diffusion models directly on the forward process via flow matching. DiffusionNFT contrasts positive and negative generations to define an implicit policy improvement direction, naturally incorporating reinforcement signals into the supervised learning objective. This formulation enables training with arbitrary black-box solvers, eliminates the need for likelihood estimation, and requires only clean images rather than sampling trajectories for policy optimization. DiffusionNFT is up to $25\times$ more efficient than FlowGRPO in head-to-head comparisons, while being CFG-free. For instance, DiffusionNFT improves the GenEval score from 0.24 to 0.98 within 1k steps, whereas FlowGRPO needs over 5k steps and additional CFG to reach 0.95. By leveraging multiple reward models, DiffusionNFT significantly boosts the performance of SD3.5-Medium on every benchmark tested.
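The abstract describes the method only at a high level. Below is a minimal, hypothetical PyTorch sketch of the general idea it names: a reward-weighted flow-matching loss computed on the forward process, using clean images (not sampling trajectories) and no likelihood estimation. It assumes a rectified-flow linear interpolation; all names (`velocity_model`, `rewards`) are placeholders, and the signed weighting is an illustration of contrasting positive and negative generations, not the paper's actual DiffusionNFT objective.

```python
# Hypothetical sketch: reward-weighted flow matching on the forward process.
# Positives pull the model toward their flow-matching target; negatives push
# it away (weights clamped via tanh for stability). Not the paper's exact loss.

import torch
import torch.nn.functional as F

def negative_aware_fm_loss(velocity_model, clean_images, rewards):
    """clean_images: (B, C, H, W) samples generated by the current policy.
    rewards:      (B,) scalar rewards from a reward model."""
    b = clean_images.shape[0]
    t = torch.rand(b, device=clean_images.device)        # random time in [0, 1]
    noise = torch.randn_like(clean_images)

    # Rectified-flow forward process: x_t = (1 - t) * x_0 + t * eps,
    # with target velocity (eps - x_0).
    t_ = t.view(-1, 1, 1, 1)
    x_t = (1.0 - t_) * clean_images + t_ * noise
    target_v = noise - clean_images

    pred_v = velocity_model(x_t, t)                       # any architecture; no solver needed
    per_sample = F.mse_loss(pred_v, target_v, reduction="none").mean(dim=(1, 2, 3))

    # Signed weights turn the supervised objective into a reinforcement signal.
    weights = torch.tanh(rewards)
    return (weights * per_sample).mean()
```

In this sketch the rollout solver only produces `clean_images` and `rewards` offline, so it can be an arbitrary black box; the update itself never touches the reverse process or its likelihood.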