Low-light image super-resolution (LLSR) is a challenging task due to the coupled degradation of low resolution and poor illumination. To address this, we propose the Guided Texture and Feature Modulation Network (GTFMN), a novel framework that decouples LLSR into two sub-problems: illumination estimation and texture restoration. First, our network employs a dedicated Illumination Stream that predicts a spatially varying illumination map to accurately capture the lighting distribution. This map then serves as an explicit guide within our novel Illumination Guided Modulation Block (IGM Block), which dynamically modulates features in the Texture Stream. This mechanism achieves spatially adaptive restoration, enabling the network to intensify enhancement in poorly lit regions while preserving details in well-exposed areas. Extensive experiments on the OmniNormal5 and OmniNormal15 datasets demonstrate that GTFMN outperforms competing methods in both quantitative metrics and visual quality.
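The exact design of the IGM Block is not given here, but the core idea — using the predicted illumination map as a spatial guide to scale and shift texture-stream features, in the spirit of FiLM-style conditional modulation — can be sketched as below. This is a minimal NumPy sketch under stated assumptions: the function name, the per-pixel linear maps standing in for learned 1×1 convolutions, and the tensor layout are all illustrative, not the paper's actual implementation.

```python
import numpy as np

def illumination_guided_modulation(features, illum_map, w_gamma, w_beta):
    """Spatially adaptive feature modulation (illustrative sketch).

    features:  (H, W, C) texture-stream feature map
    illum_map: (H, W, 1) predicted spatially varying illumination map
    w_gamma, w_beta: (1, C) per-pixel linear maps, stand-ins for
                     learned 1x1 convolutions in a real network
    """
    # Derive a per-pixel scale and shift from the illumination map,
    # so modulation strength varies with local lighting.
    gamma = illum_map @ w_gamma   # (H, W, C) spatial scale
    beta = illum_map @ w_beta     # (H, W, C) spatial shift
    return gamma * features + beta

# Toy example on random data (shapes only, no learned weights).
rng = np.random.default_rng(0)
H, W, C = 4, 4, 8
feats = rng.standard_normal((H, W, C))
illum = rng.uniform(0.1, 1.0, (H, W, 1))   # dark-to-bright map in (0, 1]
w_g = rng.standard_normal((1, C))
w_b = rng.standard_normal((1, C))
out = illumination_guided_modulation(feats, illum, w_g, w_b)
print(out.shape)  # (4, 4, 8)
```

Because the scale and shift are functions of the illumination value at each pixel, poorly lit regions and well-exposed regions receive different feature transformations, which is the spatially adaptive behavior the abstract describes.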