Adequately fusing the most salient information from multiple input channels is essential for many aerial imaging tasks. While multispectral recordings reveal features in various spectral ranges, synthetic aperture sensing makes occluded features visible. We present the first hybrid (model- and learning-based) architecture for fusing the most significant features of conventional aerial images with those of integral aerial images, which result from synthetic aperture sensing and remove occlusion. It combines the environment's spatial references with features of unoccluded targets that would normally be hidden by dense vegetation. Our method outperforms state-of-the-art two-channel and multi-channel fusion approaches both visually and quantitatively in common metrics, such as mutual information, visual information fidelity, and peak signal-to-noise ratio. The proposed model does not require manually tuned parameters, can be extended to an arbitrary number and arbitrary combinations of spectral channels, and is reconfigurable for addressing different use cases. We demonstrate examples for search and rescue, wildfire detection, and wildlife observation.
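Two of the evaluation metrics named above, peak signal-to-noise ratio and mutual information, can be sketched in a few lines. This is a minimal illustration of the standard definitions, not the paper's evaluation code; the function names, the 8-bit intensity range, and the 64-bin histogram are assumptions made for the example.

```python
import numpy as np

def psnr(reference, fused, max_val=255.0):
    # Peak signal-to-noise ratio: 10 * log10(MAX^2 / MSE).
    # Assumes 8-bit intensities by default (max_val = 255).
    mse = np.mean((reference.astype(np.float64) - fused.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

def mutual_information(a, b, bins=64):
    # Mutual information estimated from a joint intensity histogram;
    # the bin count (64) is an arbitrary choice for this sketch.
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()          # joint probability
    px = pxy.sum(axis=1, keepdims=True)  # marginal of a
    py = pxy.sum(axis=0, keepdims=True)  # marginal of b
    nz = pxy > 0                        # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))
```

For image fusion, mutual information is typically reported between each input channel and the fused result, rewarding fusion outputs that preserve information from every source.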