Diffusion models for super-resolution (SR) produce high-quality visual results but incur high computational costs. Despite the development of several methods to accelerate diffusion-based SR models, some (e.g., SinSR) fail to produce realistic perceptual details, while others (e.g., OSEDiff) may hallucinate non-existent structures. To overcome these issues, we present RSD, a new distillation method for ResShift, one of the top diffusion-based SR models. Our method trains the student network to produce images such that a new fake ResShift model trained on them coincides with the teacher model. RSD achieves single-step restoration and outperforms the teacher by a large margin. We show that our distillation method surpasses SinSR, the other distillation-based method for ResShift, putting it on par with state-of-the-art diffusion-based SR distillation methods. Compared to SR methods based on pre-trained text-to-image models, RSD produces competitive perceptual quality, yields images better aligned with the degraded inputs, and requires fewer parameters and less GPU memory. We provide experimental results on various real-world and synthetic datasets, including RealSR, RealSet65, DRealSR, ImageNet, and DIV2K.
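To make the distillation objective concrete, below is a minimal PyTorch-style sketch of one alternating update: the student is pushed so that the fake model agrees with the frozen teacher on noised student outputs, while the fake model itself is trained with a standard denoising loss on those outputs. This is illustrative only, not the paper's implementation: `student`, `teacher`, and `fake` are assumed ResShift-style networks taking `(x_t, t, lr_img)`, `q_sample` is an assumed helper for the ResShift forward (noising) transition, and the x0-prediction parameterization and 15-step schedule are assumptions.

```python
import torch
import torch.nn.functional as F

T = 15  # assumed short ResShift-style diffusion schedule

def rsd_step(student, teacher, fake, opt_student, opt_fake, lr_img, q_sample):
    """One alternating RSD-style update (illustrative sketch)."""
    # --- Student update: the student should produce images on which the
    # fake model's prediction matches the frozen teacher's. ---
    for p in fake.parameters():
        p.requires_grad_(False)  # freeze fake during the student step

    sr = student(lr_img)                                  # one-step restoration
    t = torch.randint(1, T, (sr.size(0),), device=sr.device)
    x_t = q_sample(sr, lr_img, t)                         # diffuse student output
    with torch.no_grad():
        teacher_pred = teacher(x_t, t, lr_img)            # frozen teacher target
    student_loss = F.mse_loss(fake(x_t, t, lr_img), teacher_pred)
    opt_student.zero_grad()
    student_loss.backward()                               # grads flow into student via x_t
    opt_student.step()

    for p in fake.parameters():
        p.requires_grad_(True)

    # --- Fake-model update: standard denoising loss, but fit on the
    # student's detached outputs instead of real HR images. ---
    sr = student(lr_img).detach()
    t = torch.randint(1, T, (sr.size(0),), device=sr.device)
    x_t = q_sample(sr, lr_img, t)
    fake_loss = F.mse_loss(fake(x_t, t, lr_img), sr)      # assumed x0-prediction loss
    opt_fake.zero_grad()
    fake_loss.backward()
    opt_fake.step()
```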