While numerous 3D reconstruction and novel-view synthesis methods allow for photorealistic rendering of a scene from multi-view images easily captured with consumer cameras, they bake illumination into their representations and fall short of supporting advanced applications like material editing, relighting, and virtual object insertion. The reconstruction of physically based material properties and lighting via inverse rendering promises to enable such applications. However, most inverse rendering techniques require high dynamic range (HDR) images as input, a setting that is inaccessible to most users. We present a method that recovers the physically based material properties and spatially varying HDR lighting of a scene from multi-view, low dynamic range (LDR) images. We model the LDR image formation process in our inverse rendering pipeline and propose a novel optimization strategy for materials, lighting, and a camera response model. We evaluate our approach on synthetic and real scenes against state-of-the-art inverse rendering methods that take either LDR or HDR input. Our method outperforms existing methods that take LDR images as input and enables highly realistic relighting and object insertion.
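To make the "LDR image formation" point concrete, the sketch below shows one common way such a process is modeled: HDR scene radiance is scaled by exposure, passed through a camera response function (CRF), and clipped to the displayable range. This is an illustrative assumption, not the paper's actual model; the gamma-style CRF, the `exposure` parameter, and the function name `apply_crf` are all hypothetical.

```python
import numpy as np

def apply_crf(hdr, exposure=1.0, gamma=2.2):
    """Map HDR radiance to LDR values in [0, 1].

    Illustrative image-formation sketch: exposure scaling, clipping to the
    sensor's representable range, then a gamma-style camera response curve.
    An inverse rendering pipeline that accepts LDR input must invert or
    jointly optimize a model like this (the paper fits a learned CRF;
    the fixed gamma here is an assumption for illustration).
    """
    scaled = np.asarray(hdr, dtype=np.float64) * exposure
    clipped = np.clip(scaled, 0.0, 1.0)  # radiance above 1 is lost (overexposure)
    return clipped ** (1.0 / gamma)      # nonlinear CRF applied by the camera

# Radiance values can exceed 1 in HDR; after clipping they are indistinguishable.
hdr = np.array([0.0, 0.25, 1.0, 4.0])
ldr = apply_crf(hdr)
```

The clipping step is what makes the inverse problem harder with LDR input: saturated pixels carry no information about the true radiance, which is why jointly estimating the response model alongside materials and lighting matters.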