Diffusion-based methods, endowed with a formidable generative prior, have recently received increasing attention in Image Super-Resolution (ISR). However, because low-resolution (LR) images often undergo severe degradation, it is challenging for ISR models to perceive their semantic and degradation information, resulting in restored images with incorrect content or unrealistic artifacts. To address these issues, we propose a \textit{Cross-modal Priors for Super-Resolution (XPSR)} framework. Within XPSR, cutting-edge Multimodal Large Language Models (MLLMs) are utilized to acquire precise and comprehensive semantic conditions for the diffusion model. To facilitate better fusion of cross-modal priors, a \textit{Semantic-Fusion Attention} is proposed. To distill semantic-preserving information rather than undesired degradations, a \textit{Degradation-Free Constraint} is imposed between the LR image and its high-resolution (HR) counterpart. Quantitative and qualitative results show that XPSR generates high-fidelity and highly realistic images across synthetic and real-world datasets. Code is released at \url{https://github.com/qyp2000/XPSR}.