Recently, diffusion-based blind super-resolution (SR) methods have shown a great ability to generate high-resolution images with abundant high-frequency detail, but this detail often comes at the expense of fidelity. Meanwhile, another line of research, which rectifies the reverse process of diffusion models (i.e., diffusion guidance), has demonstrated the power to generate high-fidelity results for non-blind SR. However, these methods rely on known degradation kernels, which makes them difficult to apply to blind SR. To address these issues, we present DADiff in this paper. DADiff incorporates degradation-aware models into the diffusion guidance framework, eliminating the need for known degradation kernels. In addition, we propose two novel techniques, input perturbation and a guidance scalar, to further improve performance. Extensive experimental results show that the proposed method outperforms state-of-the-art methods on blind SR benchmarks.
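For context, the sketch below illustrates the general idea behind diffusion guidance as described above: the reverse process is rectified by the gradient of a data-fidelity term that ties the sample to the low-resolution observation. This is a minimal, generic sketch in the spirit of such guidance methods, not DADiff's implementation; all names (`denoiser`, `degrade`, `zeta`) and shapes are illustrative assumptions.

```python
# Minimal sketch of diffusion guidance for SR. Assumed names throughout;
# NOT DADiff's actual code. In non-blind guidance, `degrade` is the known
# degradation operator; DADiff's contribution is to replace it with a
# degradation-aware model so the true kernel need not be known.
import torch

def guide_reverse_step(x_t, x_prev, y_lr, t, denoiser, degrade,
                       alpha_bar_t, zeta=1.0):
    """Rectify one reverse-diffusion step with a data-fidelity gradient.

    x_t:         current noisy sample at step t, shape (B, C, H, W)
    x_prev:      unguided x_{t-1} from a standard DDPM/DDIM step
    y_lr:        observed low-resolution image
    denoiser:    noise-prediction network eps_theta(x_t, t)
    degrade:     degradation operator (e.g. blur + downsample)
    alpha_bar_t: cumulative noise-schedule coefficient at step t (tensor)
    zeta:        guidance scalar weighting the fidelity correction
    """
    x_t = x_t.detach().requires_grad_(True)

    # Estimate the clean image x0 from x_t (Tweedie's formula).
    eps = denoiser(x_t, t)
    x0_hat = (x_t - (1.0 - alpha_bar_t).sqrt() * eps) / alpha_bar_t.sqrt()

    # Data-fidelity loss: the degraded estimate should match the LR input.
    fidelity = ((degrade(x0_hat) - y_lr) ** 2).sum()
    grad = torch.autograd.grad(fidelity, x_t)[0]

    # Steer the unguided sample toward consistency with the observation.
    return x_prev - zeta * grad
```

The guidance scalar `zeta` here mirrors the role of the guidance scalar mentioned in the abstract: it trades off how strongly the fidelity gradient steers each reverse step.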