We introduce ROGR, a novel approach that reconstructs a relightable 3D model of an object captured from multiple views, driven by a generative relighting model that simulates the effects of placing the object under novel environment illuminations. Our method samples the appearance of the object under multiple lighting environments, creating a dataset that is used to train a lighting-conditioned Neural Radiance Field (NeRF) that outputs the object's appearance under any input environmental lighting. The lighting-conditioned NeRF uses a novel dual-branch architecture to encode the general lighting effects and specularities separately. The optimized lighting-conditioned NeRF enables efficient feed-forward relighting under arbitrary environment maps without requiring per-illumination optimization or light transport simulation. We evaluate our approach on the established TensoIR and Stanford-ORB datasets, where it improves upon the state-of-the-art on most metrics, and showcase our approach on real-world object captures.
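To make the dual-branch idea concrete, here is a minimal, hypothetical sketch (not the paper's actual architecture) of a lighting-conditioned radiance function: one branch predicts general lighting effects from per-point features and an environment-map embedding, while a second branch, additionally conditioned on view direction, predicts specularities; the two are combined into a final color. All dimensions, names, and the combination rule are illustrative assumptions.

```python
# Hypothetical sketch of a dual-branch lighting-conditioned radiance
# function, in the spirit of the architecture described in the abstract.
# Dimensions, initialization, and the additive combination are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def mlp(params, x):
    """Tiny two-layer MLP with a ReLU hidden activation."""
    w1, b1, w2, b2 = params
    h = np.maximum(x @ w1 + b1, 0.0)
    return h @ w2 + b2

def init_mlp(d_in, d_hidden, d_out):
    """Random small weights, zero biases."""
    return (rng.normal(0, 0.1, (d_in, d_hidden)), np.zeros(d_hidden),
            rng.normal(0, 0.1, (d_hidden, d_out)), np.zeros(d_out))

D_POINT, D_ENV, D_VIEW, D_HID = 32, 16, 3, 64

# Branch 1: general lighting effects from point features + env embedding.
general_params = init_mlp(D_POINT + D_ENV, D_HID, 3)
# Branch 2: specularities, additionally conditioned on view direction.
specular_params = init_mlp(D_POINT + D_ENV + D_VIEW, D_HID, 3)

def relight(point_feat, env_embed, view_dir):
    """Predict RGB for one sample point under a given environment embedding."""
    general = mlp(general_params, np.concatenate([point_feat, env_embed]))
    spec = mlp(specular_params,
               np.concatenate([point_feat, env_embed, view_dir]))
    # The specular branch adds view-dependent highlights on top of the
    # general lighting term; a sigmoid keeps the color in [0, 1].
    return 1.0 / (1.0 + np.exp(-(general + spec)))

rgb = relight(rng.normal(size=D_POINT), rng.normal(size=D_ENV),
              np.array([0.0, 0.0, 1.0]))
print(rgb.shape)  # (3,)
```

Because the environment embedding is an ordinary network input, swapping illuminations is a single feed-forward pass, which is what lets this style of model avoid per-illumination optimization at relighting time.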