Reconstructing an object from photos and placing it virtually in a new environment goes beyond the standard novel view synthesis task: the object's appearance must adapt not only to the novel viewpoint but also to the new lighting conditions. Yet evaluations of inverse rendering methods rely on novel view synthesis data or simplistic synthetic datasets for quantitative analysis. This work presents a real-world dataset for measuring the reconstruction and rendering of objects for relighting. To this end, we capture the environment lighting and ground-truth images of the same objects in multiple environments, allowing us to reconstruct the objects from images taken in one environment and to quantify the quality of the rendered views under the unseen lighting environments. Further, we introduce a simple baseline composed of off-the-shelf methods, test several state-of-the-art methods on the relighting task, and show that novel view synthesis is not a reliable proxy for measuring performance. Code and dataset are available at https://github.com/isl-org/objects-with-lighting.