Aligning diffusion models with downstream objectives is essential for their practical applications. However, standard alignment methods often struggle with step generalization when directly applied to few-step diffusion models, leading to inconsistent performance across different denoising step scenarios. To address this, we introduce Stepwise Diffusion Policy Optimization (SDPO), a novel alignment method tailored for few-step diffusion models. Unlike prior approaches that rely on a single sparse reward from only the final step of each denoising trajectory for trajectory-level optimization, SDPO incorporates dense reward feedback at every intermediate step. By learning the differences in dense rewards between paired samples, SDPO facilitates stepwise optimization of few-step diffusion models, ensuring consistent alignment across all denoising steps. To promote stable and efficient training, SDPO introduces an online reinforcement learning framework featuring several novel strategies designed to effectively exploit the stepwise granularity of dense rewards. Experimental results demonstrate that SDPO consistently outperforms prior methods in reward-based alignment across diverse step configurations, underscoring its robust step generalization capabilities. Code is available at https://github.com/ZiyiZhang27/sdpo.
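The core idea of learning from per-step reward differences between paired samples can be illustrated with a toy sketch. Everything here is an assumption for illustration: the function name `stepwise_preference_loss`, the logistic (DPO-style) loss form, and the per-step inputs are hypothetical, not the paper's exact objective.

```python
import numpy as np

def stepwise_preference_loss(log_probs_a, log_probs_b,
                             rewards_a, rewards_b, beta=1.0):
    """Toy stepwise preference loss (hypothetical, not SDPO's exact objective).

    At each denoising step t, the pair is oriented so that the sample with
    the higher dense reward at that step is treated as the "winner", and a
    logistic loss is applied to the per-step log-probability margin.
    """
    losses = []
    for lpa, lpb, ra, rb in zip(log_probs_a, log_probs_b,
                                rewards_a, rewards_b):
        # Orient the pair by the dense reward at this step.
        if ra >= rb:
            margin = beta * (lpa - lpb)
        else:
            margin = beta * (lpb - lpa)
        # -log(sigmoid(margin)) = log(1 + exp(-margin))
        losses.append(np.log1p(np.exp(-margin)))
    # Average over denoising steps, so every step contributes feedback.
    return float(np.mean(losses))
```

In contrast to a sparse-reward setup, where only the final image's reward would drive the update, every intermediate step contributes its own term here, which is the stepwise granularity the abstract emphasizes.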