Diffusion-based image super-resolution (SR) methods have achieved remarkable success by leveraging large pre-trained text-to-image diffusion models as priors. However, these methods still face two challenges: they require dozens of sampling steps to achieve satisfactory results, which limits their efficiency in real-world scenarios, and they neglect the degradation model, which is critical auxiliary information for solving the SR problem. In this work, we introduce a novel one-step SR model, which significantly addresses the efficiency issue of diffusion-based SR methods. Unlike existing fine-tuning strategies, we design a degradation-guided Low-Rank Adaptation (LoRA) module specifically for SR, which corrects the model parameters based on degradation information pre-estimated from the low-resolution image. This module not only yields a powerful data-dependent and degradation-aware SR model but also preserves the generative prior of the pre-trained diffusion model as much as possible. Furthermore, we tailor a novel training pipeline by introducing an online negative-sample generation strategy. Combined with classifier-free guidance during inference, it substantially improves the perceptual quality of the super-resolution results. Extensive experiments demonstrate the superior efficiency and effectiveness of the proposed model compared to recent state-of-the-art methods.
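The core idea of the degradation-guided LoRA module can be illustrated with a minimal sketch: a frozen pre-trained weight is corrected by a low-rank update whose scaling is modulated by a degradation embedding estimated from the low-resolution input. The mapping `M` from the degradation embedding to per-rank scales, and the shapes used here, are hypothetical simplifications for illustration, not the paper's exact parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank, d_deg = 8, 8, 2, 4

W = rng.standard_normal((d_out, d_in))          # frozen pre-trained weight
A = rng.standard_normal((rank, d_in)) * 0.01    # LoRA down-projection (trainable)
B = np.zeros((d_out, rank))                     # LoRA up-projection (zero-initialized)
M = rng.standard_normal((rank, d_deg)) * 0.01   # hypothetical degradation-to-scale map

def degradation_guided_forward(x, deg_embed, alpha=1.0):
    """Apply W plus a low-rank correction modulated by the pre-estimated
    degradation embedding (a simplified sketch of the general idea)."""
    scale = M @ deg_embed                 # per-rank scales from degradation info
    delta = B @ (np.diag(scale) @ A)      # degradation-modulated low-rank update
    return (W + alpha * delta) @ x

x = rng.standard_normal(d_in)             # feature vector at some layer
deg = rng.standard_normal(d_deg)          # degradation embedding from the LR image
y = degradation_guided_forward(x, deg)
```

Because `B` is zero-initialized, the module initially reproduces the frozen pre-trained mapping exactly, so training only gradually introduces the degradation-dependent correction while the generative prior stays intact.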