We present a novel approach that leverages the prior knowledge encapsulated in pre-trained text-to-image diffusion models for blind super-resolution (SR). Specifically, by employing our time-aware encoder, we achieve promising restoration results without altering the pre-trained synthesis model, thereby preserving the generative prior and minimizing training cost. To remedy the loss of fidelity caused by the inherent stochasticity of diffusion models, we employ a controllable feature wrapping module that allows users to balance quality and fidelity by simply adjusting a scalar value during inference. Moreover, we develop a progressive aggregation sampling strategy to overcome the fixed-size constraints of pre-trained diffusion models, enabling adaptation to arbitrary resolutions. A comprehensive evaluation on both synthetic and real-world benchmarks demonstrates the superiority of our method over current state-of-the-art approaches. Code and models are available at https://github.com/IceClear/StableSR.
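The quality-fidelity trade-off of the controllable feature wrapping module can be pictured as a scalar-weighted blend between the diffusion decoder's features and features from the low-resolution encoder. The sketch below is a minimal illustration under assumed names; in particular, the identity blend stands in for the module's learned transform and is not the paper's exact formulation:

```python
import numpy as np

def controllable_feature_wrapping(decoder_feat: np.ndarray,
                                  encoder_feat: np.ndarray,
                                  w: float) -> np.ndarray:
    """Illustrative sketch only: blend generative (decoder) features
    with fidelity-preserving (encoder) features via a user-chosen
    scalar w in [0, 1]. w -> 1 favors fidelity to the input image;
    w -> 0 favors the generative prior. A learned transform would
    normally modulate encoder_feat; a plain interpolation is used
    here purely for illustration."""
    return decoder_feat + w * (encoder_feat - decoder_feat)

# At w = 0 the output equals the decoder features; at w = 1,
# the encoder features; intermediate w trades one for the other.
blended = controllable_feature_wrapping(np.zeros((2, 2)),
                                        np.ones((2, 2)),
                                        0.5)
```

Because w enters only at inference time, a user can re-run sampling with different values without retraining anything.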
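The progressive aggregation sampling strategy can be approximated by processing the image in fixed-size overlapping tiles (the resolution the diffusion model was trained on) and blending the tiles back with center-weighted windows so seams cancel out. The function name, Gaussian weighting choice, and array layout below are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def aggregate_overlapping_tiles(tiles, positions, out_shape, tile_size):
    """Hedged sketch: fuse fixed-size overlapping tiles into one
    large output. Each tile is weighted by a 2-D Gaussian window
    emphasizing its center, and overlapping contributions are
    normalized by the accumulated weights."""
    out = np.zeros(out_shape, dtype=np.float64)
    weight = np.zeros(out_shape, dtype=np.float64)
    # Separable Gaussian window over the tile extent.
    ax = np.linspace(-1.0, 1.0, tile_size)
    g = np.exp(-(ax ** 2) / 0.5)
    win = np.outer(g, g)
    for tile, (y, x) in zip(tiles, positions):
        out[y:y + tile_size, x:x + tile_size] += tile * win
        weight[y:y + tile_size, x:x + tile_size] += win
    # Normalize; the epsilon guards uncovered pixels.
    return out / np.maximum(weight, 1e-8)
```

The key point is that the pre-trained model only ever sees inputs of its native size, while the aggregation step removes the resulting fixed-size constraint at the output.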