Object detection in visible (RGB) and infrared (IR) images has been widely applied in recent years. By leveraging the complementary characteristics of RGB and IR images, an object detector can provide reliable and robust object localization from day to night. Existing fusion strategies directly inject RGB and IR images into convolutional neural networks, leading to inferior detection performance. Because RGB and IR features carry modality-specific noise, these strategies degrade the fused features as the noise propagates through the network. Inspired by how the human brain processes multimodal information, this work introduces a new coarse-to-fine perspective for purifying and fusing the two modality features. Specifically, following this perspective, we design a Redundant Spectrum Removal module to coarsely remove interfering information within each modality and a Dynamic Feature Selection module to finely select the desired features for feature fusion. To verify the effectiveness of the coarse-to-fine fusion strategy, we construct a new object detector called the Removal and Selection Detector (RSDet). Extensive experiments on three RGB-IR object detection datasets demonstrate the superior performance of our method.
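The coarse-to-fine idea above can be illustrated with a minimal, non-learned sketch. Here we assume, purely for illustration, that the coarse "spectrum removal" step is a frequency-domain low-pass filter (treating high-frequency components as modality noise) and that the fine "dynamic selection" step is a per-pixel soft gate over the two modalities based on feature magnitude; the actual RSDet modules are learned, and the function names and `keep_ratio` parameter are hypothetical.

```python
import numpy as np

def remove_redundant_spectrum(feat, keep_ratio=0.6):
    # Coarse step (illustrative): keep only a centered low-frequency band
    # of the 2D spectrum, discarding components assumed to carry noise.
    f = np.fft.fftshift(np.fft.fft2(feat))
    h, w = feat.shape
    ch, cw = h // 2, w // 2
    rh, rw = int(ch * keep_ratio), int(cw * keep_ratio)
    mask = np.zeros_like(f)
    mask[ch - rh:ch + rh, cw - rw:cw + rw] = 1.0
    return np.fft.ifft2(np.fft.ifftshift(f * mask)).real

def dynamic_feature_selection(rgb_feat, ir_feat):
    # Fine step (illustrative): a per-pixel softmax over feature magnitudes
    # weights the more "active" modality higher before fusing.
    logits = np.stack([np.abs(rgb_feat), np.abs(ir_feat)])
    weights = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)
    return weights[0] * rgb_feat + weights[1] * ir_feat

# Toy single-channel features standing in for RGB and IR feature maps.
rng = np.random.default_rng(0)
rgb = rng.normal(size=(32, 32))
ir = rng.normal(size=(32, 32))
fused = dynamic_feature_selection(remove_redundant_spectrum(rgb),
                                  remove_redundant_spectrum(ir))
print(fused.shape)  # (32, 32)
```

In a real detector both steps would operate on multi-channel backbone features and the filter mask and gating weights would be predicted by small networks rather than fixed heuristics.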