Single-image relighting is a challenging task that involves reasoning about the complex interplay between geometry, materials, and lighting. Many prior methods either support only specific categories of images, such as portraits, or require special capture conditions, like using a flashlight. Alternatively, some methods explicitly decompose a scene into intrinsic components, such as normals and BRDFs, which can be inaccurate or under-expressive. In this work, we propose a novel end-to-end 2D relighting diffusion model, called Neural Gaffer, that takes a single image of any object and can synthesize an accurate, high-quality relit image under any novel environmental lighting condition, simply by conditioning an image generator on a target environment map, without an explicit scene decomposition. Our method builds on a pre-trained diffusion model and fine-tunes it on a synthetic relighting dataset, revealing and harnessing the inherent understanding of lighting present in the diffusion model. We evaluate our model on both synthetic and in-the-wild Internet imagery and demonstrate its advantages in terms of generalization and accuracy. Moreover, by combining it with other generative methods, our model enables many downstream 2D tasks, such as text-based relighting and object insertion. Our model can also operate as a strong relighting prior for 3D tasks, such as relighting a radiance field.
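The abstract describes conditioning a pre-trained image diffusion model on a target environment map and fine-tuning it on synthetic relighting data. The sketch below is a minimal, hypothetical illustration of one such conditioning scheme, not the authors' released code: it assumes latent-space diffusion with epsilon-prediction training, and that the source image and environment map are pre-encoded and concatenated with the noisy latent along the channel dimension. The module names, shapes, and the channel-concatenation strategy are illustrative assumptions not confirmed by the abstract.

```python
# Toy sketch (assumptions labeled above): a relighting denoiser that predicts
# the noise in a noisy relit-image latent, conditioned on the source image and
# a target environment map via channel concatenation.
import torch
import torch.nn as nn

class RelightingDenoiser(nn.Module):
    """Stand-in for a fine-tuned diffusion UNet. Inputs are assumed to be
    pre-encoded to the same latent resolution."""
    def __init__(self, latent_ch: int = 4, env_ch: int = 4, hidden: int = 64):
        super().__init__()
        # noisy relit latent + source-image condition + environment-map condition
        in_ch = latent_ch + latent_ch + env_ch
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(hidden, latent_ch, 3, padding=1),  # predicted noise
        )

    def forward(self, noisy_latent, image_cond, env_cond):
        # A real diffusion UNet would also embed the timestep; omitted here.
        x = torch.cat([noisy_latent, image_cond, env_cond], dim=1)
        return self.net(x)

# One fine-tuning step under standard epsilon-prediction DDPM training,
# with random tensors standing in for encoded data.
model = RelightingDenoiser()
x0 = torch.randn(2, 4, 32, 32)          # latent of the ground-truth relit image
image_cond = torch.randn(2, 4, 32, 32)  # encoded source image
env_cond = torch.randn(2, 4, 32, 32)    # encoded target environment map
noise = torch.randn_like(x0)
alpha_bar = torch.tensor(0.7)           # noise-schedule value at a sampled timestep
noisy = alpha_bar.sqrt() * x0 + (1 - alpha_bar).sqrt() * noise
loss = nn.functional.mse_loss(model(noisy, image_cond, env_cond), noise)
loss.backward()
```

At inference, the same conditioning would be held fixed while the latent is iteratively denoised from pure noise, so swapping in a different environment map yields a different relit result for the same source image.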