In the field of low-light image enhancement, both traditional Retinex methods and advanced deep learning techniques such as Retinexformer have shown distinct advantages and limitations. Traditional Retinex methods, designed to mimic the human eye's perception of brightness and color, decompose images into illumination and reflectance components, but they struggle with noise suppression and detail preservation under low-light conditions. Retinexformer improves illumination estimation through conventional self-attention mechanisms, yet it suffers from limited interpretability and suboptimal enhancement quality. To overcome these limitations, this paper introduces the RetinexMamba architecture. RetinexMamba retains the physical intuitiveness of traditional Retinex methods while building on the deep learning framework of Retinexformer, and it leverages the computational efficiency of State Space Models (SSMs) to accelerate processing. The architecture features an innovative illumination estimator and a damage restorer that preserve image quality during enhancement. Moreover, RetinexMamba replaces the IG-MSA (Illumination-Guided Multi-head Self-Attention) in Retinexformer with a Fused-Attention mechanism, improving the model's interpretability. Experimental evaluations on the LOL dataset show that RetinexMamba outperforms existing Retinex-based deep learning approaches in both quantitative and qualitative metrics, confirming its effectiveness and superiority for low-light image enhancement.
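To make the Retinex decomposition mentioned above concrete, the following is a minimal sketch of the classical single-scale split: the illumination component is approximated by a Gaussian-smoothed copy of the image, and the reflectance is what remains after dividing the illumination out. This is an illustrative baseline only, not RetinexMamba's learned illumination estimator; the function name, the toy image, and the choice of Gaussian smoothing are assumptions for demonstration.

```python
import numpy as np

def retinex_decompose(image, sigma=3.0, eps=1e-6):
    """Split a grayscale image into illumination and reflectance.

    Classical single-scale Retinex assumption: illumination varies
    smoothly, so a Gaussian blur approximates it; dividing it out
    leaves the reflectance. Illustrative only -- not the paper's
    learned estimator.
    """
    # Build a 1-D Gaussian kernel and blur separably (rows, then columns).
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()

    def blur(img):
        padded = np.pad(img, radius, mode="edge")
        rows = np.apply_along_axis(
            lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
        return np.apply_along_axis(
            lambda c: np.convolve(c, kernel, mode="valid"), 0, rows)

    illumination = blur(image.astype(np.float64))
    reflectance = image / (illumination + eps)  # I = R * L  =>  R = I / L
    return illumination, reflectance

# Toy "low-light" image: a brighter square on a dark background.
img = np.full((32, 32), 0.05)
img[8:24, 8:24] = 0.4
L, R = retinex_decompose(img)
print(L.shape, R.shape)
```

The abstract's noted weakness of such traditional methods is visible here: any noise in the dark regions is amplified by the division, which is exactly the failure mode the learned damage restorer is meant to address.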