Recovering High Dynamic Range (HDR) images from multiple Standard Dynamic Range (SDR) images becomes challenging when the SDR images exhibit noticeable degradation and missing content. Leveraging scene-specific semantic priors offers a promising solution for restoring heavily degraded regions. However, these priors are typically extracted from sRGB SDR images, and the domain/format gap poses a significant challenge when applying them to HDR imaging. To address this issue, we propose a general framework that transfers semantic knowledge derived from the SDR domain via self-distillation to boost existing HDR reconstruction methods. Specifically, the proposed framework first introduces the Semantic Priors Guided Reconstruction Model (SPGRM), which leverages SDR image semantic knowledge to address the ill-posed problems in the initial HDR reconstruction results. Subsequently, we apply a self-distillation mechanism that constrains the color and content information with semantic knowledge, aligning the external outputs between the baseline and the SPGRM. Furthermore, to transfer the semantic knowledge of the internal features, we utilize a Semantic Knowledge Alignment Module (SKAM) to fill in the missing semantic content via complementary masks. Extensive experiments demonstrate that our framework significantly boosts the HDR imaging quality of existing methods without altering the network architecture.
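The abstract does not give concrete loss formulations, so the following minimal PyTorch sketch only illustrates the two mechanisms it names: aligning the external outputs of the baseline with those of the SPGRM via self-distillation, and filling missing semantic content in internal features via complementary masks. All function and variable names, and the use of L1 distances, are assumptions for illustration rather than the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def output_distillation_loss(baseline_out, spgrm_out):
    # Hypothetical self-distillation term: constrain the baseline's external
    # output (color/content) to follow the semantic-prior-guided output.
    # The SPGRM output acts as the teacher, so gradients are not propagated
    # through it.
    return F.l1_loss(baseline_out, spgrm_out.detach())

def complementary_mask_alignment(baseline_feat, spgrm_feat, mask):
    # Hypothetical internal-feature alignment: where the mask marks missing
    # semantic content (mask == 0), fill in features from the SPGRM; keep the
    # baseline's features elsewhere (mask == 1), then pull the baseline
    # features toward the fused target.
    fused = mask * baseline_feat + (1.0 - mask) * spgrm_feat
    return F.l1_loss(baseline_feat, fused.detach())

# Toy usage with random tensors standing in for network outputs/features.
if __name__ == "__main__":
    baseline_out = torch.rand(1, 3, 64, 64)   # baseline HDR prediction
    spgrm_out = torch.rand(1, 3, 64, 64)      # SPGRM HDR prediction
    baseline_feat = torch.rand(1, 32, 16, 16) # baseline internal features
    spgrm_feat = torch.rand(1, 32, 16, 16)    # SPGRM internal features
    mask = (torch.rand(1, 1, 16, 16) > 0.5).float()

    loss = output_distillation_loss(baseline_out, spgrm_out) \
         + complementary_mask_alignment(baseline_feat, spgrm_feat, mask)
    print(loss.item())
```

Because both terms act only on outputs and intermediate features, such a scheme can be attached to an existing HDR network during training without modifying its architecture, which is consistent with the framework's stated design goal.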