The problem of blind image super-resolution aims to recover high-resolution (HR) images from low-resolution (LR) images with unknown degradation modes. Most existing methods model the image degradation process using blur kernels. However, this explicit modeling approach struggles to cover the complex and varied degradation processes encountered in the real world, such as high-order combinations of JPEG compression, blur, and noise. Implicit modeling of the degradation process can effectively overcome this issue, but a key challenge of implicit modeling is the lack of accurate ground-truth labels for the degradation process to conduct supervised training. To overcome this limitation inherent in implicit modeling, we propose an \textbf{U}ncertainty-based degradation representation framework for blind \textbf{S}uper-\textbf{R}esolution (\textbf{USR}). By suppressing the uncertainty of local degradation representations in images, USR facilitates self-supervised learning of degradation representations. USR consists of two components: Adaptive Uncertainty-Aware Degradation Extraction (AUDE) and a feature extraction network composed of Variable Depth Dynamic Convolution (VDDC) blocks. To extract Uncertainty-based Degradation Representations from LR images, AUDE utilizes a Self-supervised Uncertainty Contrast module with an Uncertainty Suppression Loss to suppress the inherent model uncertainty of the Degradation Extractor. Furthermore, the VDDC block integrates degradation information through dynamic convolution. The VDDC block also employs an Adaptive Intensity Scaling operation that adaptively adjusts the degradation representation according to the network hierarchy, thereby facilitating the effective integration of degradation information. Quantitative and qualitative experiments affirm the superiority of our approach.
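The degradation-conditioned dynamic convolution described above can be sketched in miniature as follows. This is a minimal NumPy illustration under assumed shapes, not the paper's implementation: the kernel-generator matrix `w_gen` and the per-layer scale `alpha` (standing in for Adaptive Intensity Scaling) are hypothetical.

```python
import numpy as np

def dynamic_conv1x1(feat, degradation, w_gen, alpha=1.0):
    """Per-image 1x1 dynamic convolution conditioned on a degradation vector.

    feat:        (C, H, W) feature map for one image
    degradation: (D,) degradation representation extracted from the LR image
    w_gen:       (D, C*C) hypothetical kernel-generator weights
    alpha:       scalar mimicking a per-layer intensity scale (assumed form)
    """
    C, H, W = feat.shape
    d = alpha * degradation                          # scale representation by network depth
    kernel = (d @ w_gen).reshape(C, C)               # degradation -> per-image 1x1 kernel
    return np.einsum('oc,chw->ohw', kernel, feat)    # channel-mixing at every pixel
```

Because the kernel is regenerated per image from its degradation representation, the same network weights adapt the feature transform to each input's (unknown) degradation.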