We present ControlSR, a new method that tames diffusion models for consistent real-world image super-resolution (Real-ISR). Previous Real-ISR models mostly focus on activating more of the generative priors in text-to-image diffusion models so that the output high-resolution (HR) images look better. However, because these methods rely too heavily on generative priors, the content of the output images is often inconsistent with that of the input low-resolution (LR) images. To mitigate this issue, we tame diffusion models by effectively utilizing LR information to impose stronger constraints on the control signals from ControlNet in the latent space. We show that our method produces higher-quality control signals, which makes the super-resolution results more consistent with the LR image and yields clearer visual results. In addition, we propose an inference strategy that imposes constraints in the latent space using LR information, improving fidelity and generative ability simultaneously. Experiments demonstrate that our model achieves better performance across multiple metrics on several test sets and generates SR results that are more consistent with the LR images than those of existing methods. Our code is available at https://github.com/HVision-NKU/ControlSR.