Implicit degradation modeling-based blind super-resolution (SR) has attracted increasing attention in the community due to its excellent generalization to complex degradation scenarios and its wide applicability. The key to this task is extracting more discriminative degradation representations and fully adapting them to specific image features. In this paper, we propose a new Content-decoupled Contrastive Learning-based blind image super-resolution (CdCL) framework that follows the typical blind SR pipeline. The framework introduces, for the first time, a negative-free contrastive learning technique to model the implicit degradation representation, in which a novel cyclic shift sampling strategy ensures the decoupling of content features and degradation features from the data perspective, thereby improving the purity and discriminability of the learned implicit degradation space. In addition, we propose a detail-aware implicit degradation adapting module that adapts degradation representations to specific LR features more effectively by enhancing the basic adaptation unit's perception of image details, significantly reducing the overall complexity of the SR model. Extensive experiments on synthetic and real data show that our method achieves highly competitive quantitative and qualitative results across various degradation settings while markedly reducing parameters and computational cost, validating the feasibility of designing practical and lightweight blind SR tools.
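The intuition behind cyclic shift sampling can be illustrated with a minimal sketch: cyclically rolling an LR image rearranges its content while leaving the degradation (blur, noise, downsampling artifacts) statistically unchanged, so the original and shifted crops form a positive pair whose only shared factor is the degradation. The function below is a simplified illustration of that idea, not the paper's actual implementation; the name `cyclic_shift_views` and the shift-range parameter are hypothetical.

```python
import numpy as np

def cyclic_shift_views(lr, max_shift):
    """Build a positive pair for negative-free contrastive learning.

    lr        : HxW (or HxWxC) low-resolution image as a NumPy array.
    max_shift : maximum cyclic offset in pixels (hypothetical parameter).

    The rolled view contains the same pixels (same degradation statistics)
    but spatially rearranged content, encouraging the encoder to agree on
    degradation while ignoring content.
    """
    rng = np.random.default_rng()
    dy = rng.integers(1, max_shift + 1)  # vertical cyclic offset
    dx = rng.integers(1, max_shift + 1)  # horizontal cyclic offset
    view = np.roll(np.roll(lr, dy, axis=0), dx, axis=1)
    return lr, view
```

Because `np.roll` is a permutation of pixels, the two views have identical pixel statistics by construction, which is what makes the pair content-decoupled from the data perspective.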
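The abstract does not specify which negative-free objective is used, but a common choice in this family (e.g. SimSiam-style training) is the negative cosine similarity between a predictor output and a stop-gradient target computed from the paired view. The sketch below shows that loss in NumPy purely for illustration; treating `z` as a constant stands in for the stop-gradient operation.

```python
import numpy as np

def negative_free_loss(p, z, eps=1e-8):
    """Negative cosine similarity between predictor outputs p and
    stop-gradient targets z, both of shape (batch, dim).

    In an actual training loop z would be detached from the graph;
    here it is simply treated as a constant array.
    """
    p = p / (np.linalg.norm(p, axis=-1, keepdims=True) + eps)
    z = z / (np.linalg.norm(z, axis=-1, keepdims=True) + eps)
    return -np.mean(np.sum(p * z, axis=-1))  # minimized at -1 (aligned pairs)
```

Minimizing this quantity pulls the degradation embeddings of the two cyclically shifted views together without requiring any negative samples.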