Manipulating the illumination of a 3D scene within a single image represents a fundamental challenge in computer vision and graphics. This problem has traditionally been addressed with inverse rendering techniques, which require explicit 3D asset reconstruction and costly ray-tracing simulations. Meanwhile, recent advances in visual foundation models suggest that a new paradigm may soon be practical -- one that replaces explicit physical models with networks trained on large amounts of image and video data. In this paper, we exploit the implicit scene understanding of a video diffusion model, specifically Stable Video Diffusion, to relight a single image. We introduce GenLit, a framework that distills a graphics engine's ability to perform light manipulation into a video-generation model, enabling users to insert and manipulate a point light within the 3D world of a given image and render the result directly as a video sequence. We find that a model fine-tuned on only a small synthetic dataset generalizes to real-world scenes, enabling single-image relighting with plausible and convincing shadows and inter-reflections. Our results highlight the ability of video foundation models to capture rich information about lighting, material, and shape, and our findings indicate that such models, with minimal training, can perform relighting without explicit asset reconstruction or ray-tracing. Project page: https://genlit.is.tue.mpg.de/.