We present a method for relighting 3D reconstructions of large room-scale environments. Existing solutions for 3D scene relighting often require solving under-determined or ill-conditioned inverse rendering problems, and are thus unable to produce high-quality results on complex real-world scenes. Though recent progress in using generative image and video diffusion models for relighting has been promising, these techniques are limited to either 2D image and video relighting or 3D relighting of individual objects. Our approach enables controllable 3D relighting of room-scale scenes by distilling the outputs of a video-to-video relighting diffusion model into a 3D reconstruction. This sidesteps the need to solve a difficult inverse rendering problem and results in a flexible system that can relight 3D reconstructions of complex real-world scenes. We validate our approach on both synthetic and real-world datasets, showing that it faithfully renders novel views of scenes under new lighting conditions.