Recent progress in remote sensing image (RSI) super-resolution (SR) has exhibited remarkable performance using deep neural networks, e.g., Convolutional Neural Networks and Transformers. However, existing SR methods often suffer from either a limited receptive field or quadratic computational overhead, resulting in sub-optimal global representation and unacceptable computational cost on large-scale RSIs. To alleviate these issues, we make the first attempt to integrate the Vision State Space Model (Mamba) into RSI-SR; Mamba specializes in processing large-scale RSIs by capturing long-range dependencies with linear complexity. To achieve better SR reconstruction, building upon Mamba, we devise a Frequency-assisted Mamba framework, dubbed FMSR, to explore both spatial and frequency correlations. In particular, our FMSR features a multi-level fusion architecture equipped with a Frequency Selection Module (FSM), a Vision State Space Module (VSSM), and a Hybrid Gate Module (HGM) that combine their respective merits for effective spatial-frequency fusion. Recognizing that global and local dependencies are complementary and both beneficial for SR, we further recalibrate these multi-level features via learnable scaling adaptors for accurate feature fusion. Extensive experiments on the AID, DOTA, and DIOR benchmarks demonstrate that our FMSR outperforms the state-of-the-art Transformer-based method HAT-L in PSNR by 0.11 dB on average, while requiring only 28.05% of its memory consumption and 19.08% of its computational complexity.
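The claim of long-range dependency at linear cost rests on the state-space recurrence underlying Mamba. As a minimal illustration (not the paper's implementation, and with arbitrarily chosen scalar parameters `A`, `B`, `C`), the following sketch shows why a single left-to-right scan of the discrete recurrence h_t = A·h_{t-1} + B·x_t, y_t = C·h_t touches each token once, giving O(L) cost in sequence length L, unlike the O(L²) pairwise interactions of self-attention:

```python
# Illustrative sketch only: a scalar discrete state-space recurrence,
# the core mechanism behind linear-complexity sequence scans in SSMs.
# Parameters A, B, C are hypothetical constants, not values from FMSR.

def ssm_scan(x, A=0.9, B=1.0, C=0.5):
    """Run h_t = A*h_{t-1} + B*x_t, y_t = C*h_t over x in O(len(x))."""
    h = 0.0
    ys = []
    for x_t in x:
        h = A * h + B * x_t   # hidden state carries long-range context
        ys.append(C * h)      # per-step readout
    return ys

# A unit impulse decays geometrically through the state,
# showing how early inputs still influence distant outputs:
print(ssm_scan([1.0, 0.0, 0.0]))  # [0.5, 0.45, 0.405]
```

Real selective SSMs make A, B, and C input-dependent and vectorized, but the single-pass structure, and hence the linear scaling exploited here for large RSIs, is the same.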