The integration of miniaturized spectrometers into mobile devices offers new avenues for image quality enhancement and enables novel downstream tasks. However, the broader application of spectral sensors in mobile photography is hindered by the inherent complexity of spectral images and the limited capabilities of on-device spectral imaging. To overcome these challenges, we propose a joint RGB-Spectral decomposition model guided enhancement framework, which consists of two steps: joint decomposition and prior-guided enhancement. First, we leverage the complementarity between RGB and Low-resolution Multi-Spectral Images (Lr-MSI) to predict shading, reflectance, and material semantic priors. Subsequently, these priors are seamlessly integrated into the established HDRNet to promote dynamic range enhancement, color mapping, and grid expert learning, respectively. Additionally, we construct a high-quality Mobile-Spec dataset to support our research, and our experiments validate the effectiveness of Lr-MSI in the tone enhancement task. This work aims to establish a solid foundation for advancing spectral vision in mobile photography. The code is available at \url{https://github.com/CalayZhou/JDM-HDRNet}.