Underwater Image Enhancement (UIE) is an ill-posed problem: natural clean references are unavailable, and degradation levels vary significantly across semantic regions. Existing UIE methods treat images with a single global model and ignore the inconsistent degradation of different scene components, which leads to severe color distortions and loss of fine detail in heterogeneous underwater scenes. We therefore propose SUCode (Semantic-aware Underwater Codebook Network), which achieves adaptive UIE through a semantic-aware discrete codebook representation. Compared with one-shot codebook-based methods, SUCode exploits a semantic-aware, pixel-level codebook representation tailored to heterogeneous underwater degradation. A three-stage training paradigm represents raw underwater image features while avoiding pseudo ground-truth contamination. A Gated Channel Attention Module (GCAM) and a Frequency-Aware Feature Fusion (FAFF) module jointly integrate channel and frequency cues for faithful color restoration and texture recovery. Extensive experiments on multiple benchmarks demonstrate that SUCode achieves state-of-the-art performance, outperforming recent UIE methods on both reference and no-reference metrics. The code will be made publicly available at https://github.com/oucailab/SUCode.
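To make the core mechanism concrete, the following is a minimal sketch of the pixel-level discrete codebook lookup that codebook-based methods such as SUCode build on: each per-pixel encoder feature is replaced by its nearest entry in a learned codebook. The function name, shapes, and codebook size here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def codebook_lookup(features, codebook):
    """Quantize per-pixel features to their nearest codebook entries.

    features: (H, W, C) array of encoder features.
    codebook: (K, C) array of learned code vectors.
    Returns the quantized features (H, W, C) and the index map (H, W).
    """
    H, W, C = features.shape
    flat = features.reshape(-1, C)  # (H*W, C)
    # Squared Euclidean distance from every pixel feature to every code vector.
    dist = ((flat[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)  # (H*W, K)
    idx = dist.argmin(axis=1)  # nearest code index per pixel
    quant = codebook[idx].reshape(H, W, C)
    return quant, idx.reshape(H, W)

# Toy example with random features and a 16-entry codebook.
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 4, 8))
book = rng.normal(size=(16, 8))
quant, ids = codebook_lookup(feats, book)
print(quant.shape, ids.shape)  # (4, 4, 8) (4, 4)
```

A semantic-aware variant, as the abstract describes, would condition this lookup on per-region semantics (e.g. selecting among region-specific codebooks) rather than applying one global codebook to all pixels.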