Low-light remote sensing images generally feature high resolution and high spatial complexity, with surface features distributed continuously in space. This scene continuity gives rise to extensive long-range correlations in the spatial domain. Convolutional neural networks, which rely on local correlations, struggle to model such long-range dependencies in these images, while transformer-based methods that capture global information incur high computational cost when processing high-resolution remote sensing images. The Fourier transform, by contrast, computes global information without introducing a large number of parameters, enabling a network to capture the overall image structure and establish long-range correlations more efficiently. We therefore propose a Dual-Domain Feature Fusion Network (DFFN) for low-light remote sensing image enhancement. Specifically, the challenging low-light enhancement task is divided into two more manageable sub-tasks: the first phase learns amplitude information to restore image brightness, and the second phase learns phase information to refine details. To facilitate information exchange between the two phases, we design an information fusion affine block that combines features from different phases and scales. In addition, we construct two low-light remote sensing datasets to address the current scarcity of datasets for low-light remote sensing image enhancement. Extensive evaluations show that our method outperforms existing state-of-the-art methods. The code is available at https://github.com/iijjlk/DFFN.
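The amplitude/brightness and phase/structure split above can be illustrated with a minimal NumPy sketch (this is only an illustration of the Fourier decomposition, not the DFFN architecture): because the FFT is linear, rescaling the amplitude spectrum alone changes the global brightness of the reconstruction while the phase, and hence the spatial structure, is left untouched.

```python
import numpy as np

# Stand-in for a grayscale low-light patch (hypothetical random data).
rng = np.random.default_rng(0)
img = rng.random((8, 8))

spec = np.fft.fft2(img)
amplitude = np.abs(spec)    # carries global brightness / energy information
phase = np.angle(spec)      # carries structural / detail information

# Recombining unchanged amplitude and phase recovers the original image.
recon = np.real(np.fft.ifft2(amplitude * np.exp(1j * phase)))

# Scaling only the amplitude brightens the reconstruction uniformly;
# by linearity of the FFT this equals 2 * img, with structure preserved.
brightened = np.real(np.fft.ifft2(2.0 * amplitude * np.exp(1j * phase)))
```

This is why learning amplitude in the first stage can restore brightness without disturbing scene structure, which the second, phase-learning stage then refines.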