Object detection in visible (RGB) and infrared (IR) images has been widely applied in recent years. Leveraging the complementary characteristics of RGB and IR images, the object detector provides reliable and robust object localization from day to night. Existing fusion strategies directly inject RGB and IR images into convolutional neural networks, leading to inferior detection performance. Because RGB and IR features each carry modality-specific noise, these strategies degrade the fused features as the noise propagates through the network. Inspired by the way the human brain processes multimodal information, this work introduces a new coarse-to-fine perspective for purifying and fusing the features of the two modalities. Specifically, following this perspective, we design a Redundant Spectrum Removal module to coarsely remove interfering information within each modality and a Dynamic Feature Selection module to finely select the desired features for feature fusion. To verify the effectiveness of the coarse-to-fine fusion strategy, we construct a new object detector called the Removal and Selection Detector (RSDet). Extensive experiments on three RGB-IR object detection datasets demonstrate the superior performance of our method.
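To make the coarse-to-fine idea concrete, the following is a minimal sketch of the two-stage pipeline, not the paper's actual implementation: it assumes the Redundant Spectrum Removal step can be illustrated as low-pass filtering in the 2-D frequency domain (suggested by the word "spectrum"; the real module is learned), and the Dynamic Feature Selection step as a per-element gate that weighs the two modalities. All function names and the `keep_ratio` parameter are hypothetical.

```python
import numpy as np

def redundant_spectrum_removal(feat, keep_ratio=0.5):
    """Coarse purification (hypothetical): keep only a central low-frequency
    band of each channel's 2-D spectrum, discarding the rest as interference."""
    H, W = feat.shape[-2:]
    spec = np.fft.fftshift(np.fft.fft2(feat), axes=(-2, -1))
    mask = np.zeros((H, W))
    h, w = int(H * keep_ratio), int(W * keep_ratio)
    top, left = (H - h) // 2, (W - w) // 2
    mask[top:top + h, left:left + w] = 1.0  # central band = low frequencies
    filtered = np.fft.ifft2(np.fft.ifftshift(spec * mask, axes=(-2, -1)))
    return filtered.real

def dynamic_feature_selection(rgb_feat, ir_feat):
    """Fine selection (hypothetical): a sigmoid gate picks, per element,
    how much of each purified modality enters the fused feature."""
    gate = 1.0 / (1.0 + np.exp(-(rgb_feat - ir_feat)))
    return gate * rgb_feat + (1.0 - gate) * ir_feat

# Toy C x H x W feature maps standing in for backbone outputs.
rng = np.random.default_rng(0)
rgb = rng.standard_normal((8, 32, 32))
ir = rng.standard_normal((8, 32, 32))
fused = dynamic_feature_selection(
    redundant_spectrum_removal(rgb),
    redundant_spectrum_removal(ir))
print(fused.shape)  # (8, 32, 32)
```

The sketch only shows the data flow (per-modality purification, then gated fusion); in the detector both stages would be learned end-to-end rather than fixed as here.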