Low-light enhancement has wide applications in autonomous driving, 3D reconstruction, remote sensing, and surveillance, where it can significantly improve information utilization. However, most existing methods lack generalization and are limited to specific tasks such as image recovery. To address these issues, we propose Gated-Mechanism Mixture-of-Experts (GM-MoE), the first framework to introduce a mixture-of-experts network for low-light image enhancement. GM-MoE comprises a dynamic gated weight conditioning network and three sub-expert networks, each specializing in a distinct enhancement task, combined through a self-designed gating mechanism that dynamically adjusts the weights of the sub-expert networks for different data domains. Additionally, we integrate local and global feature fusion within the sub-expert networks to enhance image quality by capturing multi-scale features. Experimental results demonstrate that GM-MoE generalizes better than 25 compared approaches, achieving state-of-the-art PSNR on 5 benchmarks and state-of-the-art SSIM on 4 benchmarks.
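To illustrate the core idea of gating over specialized sub-experts, the following is a minimal NumPy sketch of a gated mixture-of-experts combiner. It is not the paper's architecture: the gating features (simple global image statistics), the gate parameterization, and the three toy "experts" are all hypothetical stand-ins for the learned sub-expert networks described in the abstract.

```python
import numpy as np

def softmax(scores):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(scores - np.max(scores))
    return e / e.sum()

class GatedMoE:
    """Minimal gated mixture-of-experts: a gate produces per-input
    weights over the experts, and the output is the weighted sum of
    expert outputs. (Illustrative sketch, not the paper's model.)"""

    def __init__(self, experts, gate_weights):
        self.experts = experts          # list of callables, one per sub-expert
        self.gate_weights = gate_weights  # hypothetical gate parameters, shape (n_experts, n_features)

    def __call__(self, image):
        # Hypothetical gating features: global statistics of the input image.
        features = np.array([image.mean(), image.std(), image.max()])
        weights = softmax(self.gate_weights @ features)  # dynamic expert weights, sum to 1
        # Run every expert and blend their outputs by the gate weights.
        outputs = np.stack([expert(image) for expert in self.experts])
        return np.tensordot(weights, outputs, axes=1)

# Toy "experts" loosely mimicking enhancement sub-tasks (assumptions):
# gamma brightening, identity, and a mild contrast boost.
experts = [
    lambda im: im ** 0.5,
    lambda im: im,
    lambda im: np.clip(im * 1.2, 0.0, 1.0),
]

rng = np.random.default_rng(0)
moe = GatedMoE(experts, gate_weights=rng.standard_normal((3, 3)))
low_light = rng.random((4, 4)) * 0.2   # dim synthetic image in [0, 0.2]
enhanced = moe(low_light)              # same shape as the input
```

Because the gate weights depend on statistics of each input, different inputs (data domains) yield different expert mixtures, which is the mechanism the abstract attributes to the gating network.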