Multi-modality image fusion (MMIF) aims to integrate complementary information from different modalities into a single fused image that comprehensively represents the imaging scene and facilitates downstream visual tasks. In recent years, significant progress has been made in MMIF due to advances in deep neural networks. However, existing methods cannot effectively and efficiently extract modality-specific and modality-fused features, as they are constrained by the inherent local inductive bias of CNNs or the quadratic computational complexity of Transformers. To overcome this issue, we propose a Mamba-based Dual-phase Fusion (MambaDFuse) model. First, a dual-level feature extractor is designed to capture long-range features from single-modality images by extracting low- and high-level features with CNN and Mamba blocks, respectively. Then, a dual-phase feature fusion module is proposed to obtain fused features that combine complementary information from different modalities: it uses the channel exchange method for shallow fusion and enhanced Multi-modal Mamba (M3) blocks for deep fusion. Finally, the fused image reconstruction module applies the inverse transformation of the feature extraction to generate the fused result. Extensive experiments show that our approach achieves promising results in infrared-visible image fusion and medical image fusion. Additionally, on a unified benchmark, MambaDFuse also demonstrates improved performance on downstream tasks such as object detection. Code and checkpoints will be released after the peer-review process.
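The channel-exchange shallow fusion mentioned above can be sketched as follows. This is a minimal NumPy illustration under assumptions not specified in the abstract: the exchange criterion (here, simply the first `k` channels, controlled by a hypothetical `exchange_ratio` parameter) and the feature layout `(C, H, W)` are illustrative choices, not the paper's exact method.

```python
import numpy as np

def channel_exchange(feat_a, feat_b, exchange_ratio=0.5):
    """Shallow fusion by channel exchange (illustrative sketch).

    Swaps a fraction of channels between two single-modality feature
    maps so that each stream carries information from the other.
    feat_a, feat_b: arrays of shape (C, H, W); exchange_ratio: fraction
    of channels to swap (a hypothetical hyperparameter).
    """
    c = feat_a.shape[0]
    k = int(c * exchange_ratio)  # number of channels to exchange
    fused_a = feat_a.copy()
    fused_b = feat_b.copy()
    # Exchange the first k channels between the two modalities.
    fused_a[:k] = feat_b[:k]
    fused_b[:k] = feat_a[:k]
    return fused_a, fused_b

# Toy example: two 2-channel feature maps from different modalities.
a = np.arange(2 * 2 * 2, dtype=float).reshape(2, 2, 2)
b = a + 100.0
fa, fb = channel_exchange(a, b, exchange_ratio=0.5)
```

After the call, `fa` holds modality B's first channel alongside modality A's second channel (and vice versa for `fb`), giving each stream a cheap, parameter-free injection of cross-modal information before the deeper Mamba-based fusion.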