In recent years, the fusion of multi-modal data has been widely studied for applications such as robotics, gesture recognition, and autonomous navigation. High-quality visual sensors are expensive, however, and consumer-grade sensors produce low-resolution images. To overcome this limitation, researchers have developed methods that combine RGB color images with non-visual modalities, such as thermal imagery, to improve resolution. Fusing multiple modalities to produce visually appealing, high-resolution images often requires dense models with millions of parameters and a heavy computational load, largely attributable to intricate model architectures. We propose LapGSR, a lightweight, multimodal generative model that incorporates Laplacian image pyramids for guided thermal super-resolution. Our approach applies a Laplacian pyramid to the RGB image to extract vital edge information, which is then used, in tandem with a combined pixel and adversarial loss, to bypass heavy feature-map computation in the higher layers of the model. LapGSR preserves the spatial and structural details of the image while remaining efficient and compact, yielding a model with significantly fewer parameters than other SOTA models and excellent results on two cross-domain datasets, viz. ULB17-VT and VGTSR.
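To make the pyramid step concrete, the sketch below builds a Laplacian pyramid in plain NumPy: each level stores the high-frequency residual (edge detail) lost between successive downsamples. This is a generic illustration, not the paper's implementation; LapGSR's actual pyramid construction (filter choice, number of levels) and how the residuals guide the network are details beyond this abstract, and the helper names here are invented for the example.

```python
import numpy as np

def downsample(img):
    # 2x2 average pooling as a simple blur-and-decimate stand-in
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    x = img[:h, :w]
    return 0.25 * (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2])

def upsample(img, shape):
    # nearest-neighbour expansion back to the finer resolution
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return up[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels=3):
    """Return [residual_0, ..., residual_{levels-1}, base].

    Each residual is the difference between a level and the upsampled
    version of the next coarser level, i.e. the band-pass edge detail
    that a guided super-resolution model can exploit.
    """
    pyramid, current = [], img.astype(np.float64)
    for _ in range(levels):
        down = downsample(current)
        pyramid.append(current - upsample(down, current.shape))
        current = down
    pyramid.append(current)  # low-frequency base image
    return pyramid
```

Because each level keeps the exact residual, the original image can be recovered losslessly by upsampling the base and adding the residuals back in, which is why the pyramid preserves structural detail rather than discarding it.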