A common practice in deep learning is to train large neural networks on massive datasets to achieve high accuracy across various domains and tasks. While this approach works well in many applications, it often fails drastically on data from a new modality with a significant distribution shift from the data used to pre-train the model. This paper focuses on adapting a large object detection model trained on RGB images to IR images, which exhibit a substantial modality shift. We propose the Modality Translator (ModTr) as an alternative to the common approach of fine-tuning the large model on the new modality. ModTr adapts the IR input image with a small translation network trained to directly minimize the detection loss. The original RGB model can then operate on the translated inputs without any changes or fine-tuning of its parameters. Experimental results on IR-to-RGB translation on two well-known datasets show that our simple approach yields detectors that perform comparably to or better than standard fine-tuning, without forgetting the knowledge of the original model. This opens the door to a more flexible and efficient service-based detection pipeline, in which a single, unaltered server, such as an RGB detector, runs continuously while being queried with different modalities, such as IR via the corresponding translation model. Our code is available at: https://github.com/heitorrapela/ModTr.
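The core idea described above, a small trainable translation network placed in front of a frozen RGB detector and optimized end-to-end with the detection loss, can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: `TinyTranslator` and `ToyDetector` are hypothetical stand-ins (the actual work uses a large pre-trained detector), and a per-pixel objectness loss replaces the real detection loss for brevity.

```python
import torch
import torch.nn as nn

class TinyTranslator(nn.Module):
    """Small conv net mapping a 1-channel IR image to a 3-channel RGB-like image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),  # output in [0, 1], like RGB
        )

    def forward(self, x):
        return self.net(x)

class ToyDetector(nn.Module):
    """Stand-in for the large pre-trained RGB detector (kept frozen)."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Conv2d(3, 8, 3, padding=1)
        self.head = nn.Conv2d(8, 1, 1)  # per-pixel objectness logits

    def forward(self, x):
        return self.head(torch.relu(self.backbone(x)))

translator = TinyTranslator()
detector = ToyDetector()
for p in detector.parameters():      # freeze the detector: only the
    p.requires_grad_(False)          # translator is adapted to IR

opt = torch.optim.Adam(translator.parameters(), lr=1e-3)
ir = torch.rand(2, 1, 32, 32)        # fake IR batch
target = torch.zeros(2, 1, 32, 32)   # fake objectness target

# Translate IR to the detector's input space, then apply the frozen detector;
# the detection loss backpropagates only into the translator.
pred = detector(translator(ir))
loss = nn.functional.binary_cross_entropy_with_logits(pred, target)
opt.zero_grad()
loss.backward()
opt.step()
```

Because the detector's weights never change, the same server-side model can keep serving ordinary RGB queries while IR clients route their images through their own translator first.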