Low-light image enhancement (LLIE) aims to improve illumination while preserving high-quality color and texture. However, existing methods often fail to extract reliable feature representations due to severely degraded pixel-level information under low-light conditions, resulting in poor texture restoration, color inconsistency, and artifacts. To address these challenges, we propose LightQANet, a novel framework that introduces quantized and adaptive feature learning for low-light enhancement, aiming to achieve consistent and robust image quality across diverse lighting conditions. From the static modeling perspective, we design a Light Quantization Module (LQM) to explicitly extract and quantify illumination-related factors from image features. By enforcing structured light factor learning, LQM enhances the extraction of light-invariant representations and mitigates feature inconsistency across varying illumination levels. From the dynamic adaptation perspective, we introduce a Light-Aware Prompt Module (LAPM), which encodes illumination priors into learnable prompts that dynamically guide the feature learning process. LAPM enables the model to flexibly adapt to complex and continuously changing lighting conditions, further improving enhancement quality. Extensive experiments on multiple low-light datasets demonstrate that our method achieves state-of-the-art performance, delivering superior qualitative and quantitative results across various challenging lighting scenarios.
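The abstract does not specify how the LQM quantizes illumination-related factors. As a minimal sketch, assuming a VQ-style codebook lookup (a common way to enforce structured, discrete factor learning), the quantization step could look like the following; the function name, codebook size, and feature dimensions are all hypothetical illustrations, not the paper's actual implementation.

```python
import numpy as np

def quantize_light_factors(features, codebook):
    """Snap each feature vector to its nearest codebook entry
    (vector-quantization-style nearest-neighbour lookup).

    features : (N, D) array of illumination-related feature vectors
    codebook : (K, D) array of learned light-factor codes
    Returns the quantized features (N, D) and the chosen code indices (N,).
    """
    # Pairwise squared distances between features and codes: shape (N, K)
    dists = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = dists.argmin(axis=1)      # index of the nearest light-factor code
    return codebook[idx], idx

# Toy usage with random data (stand-ins for learned values)
rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))   # 8 hypothetical light-factor codes
features = rng.normal(size=(5, 4))   # 5 feature vectors to quantize
quantized, idx = quantize_light_factors(features, codebook)
```

In a trained model the codebook entries would be learned jointly with the encoder, so that features from the same scene under different illumination map to consistent codes, which is one way to realize the light-invariant representations the abstract describes.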