Multi-modal image fusion (MMIF) enhances the information content of the fused image by combining the unique and common features extracted from different modality sensor images, improving visualization, object detection, and other downstream tasks. In this work, we introduce an interpretable network for the MMIF task, named FNet, based on an l0-regularized multi-modal convolutional sparse coding (MCSC) model. Specifically, to solve the l0-regularized CSC problem, we develop an algorithm-unrolling-based l0-regularized sparse coding (LZSC) block. Given source images of different modalities, FNet first separates their unique and common features using the LZSC block; these features are then combined to generate the final fused image. Additionally, we propose an l0-regularized MCSC model for the inverse fusion process. Based on this model, we introduce an interpretable inverse fusion network named IFNet, which is utilized during FNet's training. Extensive experiments show that FNet achieves high-quality fusion results across five different MMIF tasks. Furthermore, we show that FNet improves downstream object detection on visible-thermal image pairs. We have also visualized the intermediate results of FNet, demonstrating the interpretability of our network.
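The abstract describes the LZSC block only at a high level. As a rough illustrative sketch (not the paper's actual architecture), an l0-regularized sparse code can be computed by unrolling iterative hard thresholding (IHT), where each unrolled iteration plays the role of one network layer; in the learned version, the step size and threshold would become trainable parameters. All names and parameter values below are assumptions for illustration.

```python
import numpy as np

def hard_threshold(x, tau):
    """Hard-thresholding operator: the proximal map of the l0 penalty."""
    return np.where(np.abs(x) > tau, x, 0.0)

def unrolled_iht(D, y, num_layers=100, step=None, tau=0.05):
    """Illustrative unrolled IHT for min_z 0.5*||y - D z||^2 + lam*||z||_0.
    Each loop iteration corresponds to one 'layer' of an unrolled network;
    in a learned block, `step` and `tau` would be trained per layer."""
    if step is None:
        step = 1.0 / np.linalg.norm(D, 2) ** 2  # conservative gradient step
    z = np.zeros(D.shape[1])
    for _ in range(num_layers):
        # gradient step on the data-fidelity term, then hard thresholding
        z = hard_threshold(z + step * D.T @ (y - D @ z), tau)
    return z

# Toy usage: try to recover a 2-sparse code from a random dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
z_true = np.zeros(50)
z_true[[3, 17]] = [1.5, -2.0]
y = D @ z_true
z_hat = unrolled_iht(D, y)
```

The same iteration structure extends to the convolutional case by replacing the matrix-vector products with convolutions against a learned filter bank.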