Understanding illumination and reducing the need for supervision pose significant challenges in low-light enhancement. Current approaches are highly sensitive to the data used during training and to illumination-specific hyper-parameters, limiting their ability to handle unseen scenarios. In this paper, we propose a new zero-reference low-light enhancement framework trainable solely with normal-light images. To accomplish this, we devise an illumination-invariant prior inspired by the theory of physical light transfer. This prior serves as a bridge between normal-light and low-light images. We then develop a prior-to-image framework trained without any low-light data. During testing, this framework restores our illumination-invariant prior back to images, automatically achieving low-light enhancement. Within this framework, we leverage a pretrained generative diffusion model for its modeling capacity, introduce a bypass decoder to handle detail distortion, and offer a lightweight version for practicality. Extensive experiments demonstrate our framework's superiority in various scenarios, as well as its good interpretability, robustness, and efficiency. Code is available on our project homepage: http://daooshee.github.io/QuadPrior-Website/