Generative diffusion models (DMs) have been extensively applied to image super-resolution (ISR). Most existing methods adopt the denoising loss from DDPMs for model optimization. We posit that introducing reward feedback learning to fine-tune these models can further improve the quality of the generated images. In this paper, we propose a timestep-aware training strategy with reward feedback learning. Specifically, in the initial denoising stages of ISR diffusion, we apply low-frequency constraints to super-resolution (SR) images to maintain structural stability. In the later denoising stages, we use reward feedback learning to improve the perceptual and aesthetic quality of the SR images. In addition, we incorporate Gram-KL regularization to alleviate the stylization caused by reward hacking. Our method can be integrated into any diffusion-based ISR model in a plug-and-play manner. Experiments show that ISR diffusion models fine-tuned with our method significantly improve the perceptual and aesthetic quality of SR images, achieving excellent subjective results. Code: https://github.com/sxpro/RFSR
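The timestep-aware strategy described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the FFT-based low-frequency constraint, the softmax normalization inside the Gram-KL term, the 50% timestep switch, and the weight `lam` are all assumptions made for the sketch; `reward_fn` stands in for an arbitrary reward model.

```python
import numpy as np

def low_freq(img, cutoff=0.25):
    """Keep only low spatial frequencies via an FFT mask (mask shape is an assumption)."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = cutoff * min(h, w) / 2
    mask = ((yy - h / 2) ** 2 + (xx - w / 2) ** 2) <= r ** 2
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

def gram(feat):
    """Gram matrix of a (C, N) feature map, N = H * W."""
    return feat @ feat.T / feat.shape[1]

def _softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def gram_kl(feat_a, feat_b, eps=1e-12):
    """KL divergence between softmax-normalized Gram matrices
    (this particular normalization is an assumption)."""
    p = _softmax(gram(feat_a).ravel()) + eps
    q = _softmax(gram(feat_b).ravel()) + eps
    return float(np.sum(p * np.log(p / q)))

def timestep_aware_loss(t, T, sr, hr, feat_sr, feat_ref, reward_fn,
                        switch=0.5, lam=0.1):
    """Early (high-noise) steps: low-frequency structural MSE.
    Later steps: maximize reward, regularized by Gram-KL against
    reference features to damp stylization from reward hacking."""
    if t >= switch * T:
        d = low_freq(sr) - low_freq(hr)
        return float(np.mean(d ** 2))
    return float(-reward_fn(sr) + lam * gram_kl(feat_sr, feat_ref))
```

In a real fine-tuning loop, `sr` would be the decoded prediction at timestep `t`, `feat_sr`/`feat_ref` would come from a frozen feature extractor, and the loss would be backpropagated through the diffusion model.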