Remote Sensing Image Super-Resolution (RSISR) reconstructs high-resolution (HR) remote sensing images from low-resolution inputs to support fine-grained ground object interpretation. Existing methods face three key challenges: (1) difficulty in extracting multi-scale features from spatially heterogeneous RS scenes, (2) limited prior information, which causes semantic inconsistency in reconstructions, and (3) an imbalanced trade-off between geometric accuracy and visual quality. To address these issues, we propose the Texture Transfer Residual Denoising Dual Diffusion Model (TTRD3) with three innovations. First, a Multi-scale Feature Aggregation Block (MFAB) employs parallel heterogeneous convolutional kernels for multi-scale feature extraction. Second, a Sparse Texture Transfer Guidance (STTG) module transfers HR texture priors from reference images of similar scenes. Third, a Residual Denoising Dual Diffusion Model (RDDM) framework combines residual diffusion for deterministic reconstruction with noise diffusion for diverse generation. Experiments on multi-source RS datasets demonstrate TTRD3's superiority over state-of-the-art methods, achieving improvements of 1.43% in LPIPS and 3.67% in FID over the best-performing baselines. Code and models: https://github.com/LED-666/TTRD3.
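To illustrate the parallel heterogeneous-kernel idea behind MFAB, the minimal PyTorch sketch below runs several convolution branches with different kernel sizes in parallel and fuses them with a 1×1 convolution. The class name `MFABSketch`, the kernel sizes, channel width, activation, and residual fusion are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn


class MFABSketch(nn.Module):
    """Sketch of multi-scale feature aggregation: parallel heterogeneous
    convolution branches whose outputs are concatenated and fused by a
    1x1 convolution. All hyperparameters here are assumptions."""

    def __init__(self, channels: int = 64, kernel_sizes=(1, 3, 5, 7)):
        super().__init__()
        # One branch per kernel size; padding keeps the spatial resolution.
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, k, padding=k // 2),
                nn.GELU(),
            )
            for k in kernel_sizes
        )
        # Fuse the concatenated multi-scale features back to the input width.
        self.fuse = nn.Conv2d(channels * len(kernel_sizes), channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multi_scale = torch.cat([branch(x) for branch in self.branches], dim=1)
        return x + self.fuse(multi_scale)  # residual connection around the block


if __name__ == "__main__":
    feats = torch.randn(1, 64, 32, 32)  # e.g., a low-resolution feature map
    print(MFABSketch()(feats).shape)    # torch.Size([1, 64, 32, 32])
```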