Single-image relighting is a challenging task that involves reasoning about the complex interplay between geometry, materials, and lighting. Many prior methods either support only specific categories of images, such as portraits, or require special capture conditions, like using a flashlight. Alternatively, some methods explicitly decompose a scene into intrinsic components, such as normals and BRDFs, which can be inaccurate or under-expressive. In this work, we propose a novel end-to-end 2D relighting diffusion model, called Neural Gaffer, that takes a single image of any object and can synthesize an accurate, high-quality relit image under any novel environmental lighting condition, simply by conditioning an image generator on a target environment map, without an explicit scene decomposition. Our method builds on a pre-trained diffusion model and fine-tunes it on a synthetic relighting dataset, revealing and harnessing the inherent understanding of lighting present in the diffusion model. We evaluate our model on both synthetic and in-the-wild Internet imagery and demonstrate its advantages in terms of generalization and accuracy. Moreover, by combining it with other generative methods, our model enables many downstream 2D tasks, such as text-based relighting and object insertion. Our model can also operate as a strong relighting prior for 3D tasks, such as relighting a radiance field.