We present GI-GS, a novel inverse rendering framework that leverages 3D Gaussian Splatting (3DGS) and deferred shading to achieve photo-realistic novel view synthesis and relighting. In inverse rendering, accurately modeling the shading processes of objects is essential for achieving high-fidelity results. It is therefore critical to incorporate global illumination to account for indirect lighting that reaches an object after multiple bounces across the scene. Previous 3DGS-based methods have attempted to model indirect lighting by characterizing it as learnable lighting volumes or as additional attributes of each Gaussian, while using baked occlusion to represent shadow effects. These methods, however, fail to accurately model the complex physical interactions between light and objects, making it impossible to construct realistic indirect illumination during relighting. To address this limitation, we propose to calculate indirect lighting using efficient path tracing with deferred shading. In our framework, we first render a G-buffer to capture the detailed geometry and material properties of the scene. Then, we perform physically based rendering (PBR) for direct lighting only. With the G-buffer and the previous rendering results, indirect lighting can be calculated through a lightweight path-tracing pass. Our method effectively models indirect lighting under any given lighting conditions, thereby achieving better novel view synthesis and competitive relighting. Quantitative and qualitative results show that GI-GS outperforms existing baselines in both rendering quality and efficiency.
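The pipeline described above (G-buffer rendering, direct PBR shading, then a lightweight screen-space bounce for indirect lighting) can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: it assumes a single directional light, a Lambertian BRDF, and approximates one bounce of indirect light by treating directly shaded G-buffer texels as secondary emitters; all function and variable names (`render_deferred`, `gbuffer_pos`, etc.) are hypothetical.

```python
import numpy as np

def render_deferred(gbuffer_pos, gbuffer_nrm, gbuffer_alb,
                    light_dir, light_rgb, n_samples=16, rng=None):
    """Hypothetical one-bounce deferred-shading sketch.

    gbuffer_pos / gbuffer_nrm / gbuffer_alb: (H, W, 3) world positions,
    unit normals, and albedo, as would be rasterized from the Gaussians.
    light_dir: unit direction toward the light; light_rgb: its radiance.
    """
    H, W, _ = gbuffer_pos.shape
    rng = np.random.default_rng(0) if rng is None else rng

    # Step 1 (direct PBR): Lambertian n.l shading from the G-buffer alone.
    ndotl = np.clip((gbuffer_nrm * light_dir).sum(-1), 0.0, None)   # (H, W)
    direct = gbuffer_alb * light_rgb * ndotl[..., None]

    # Step 2 (lightweight indirect pass): sample a few random G-buffer
    # texels per pixel and gather their direct radiance as one bounce.
    flat_pos = gbuffer_pos.reshape(-1, 3)
    flat_nrm = gbuffer_nrm.reshape(-1, 3)
    flat_rad = direct.reshape(-1, 3)
    idx = rng.integers(0, H * W, size=(H * W, n_samples))
    to_src = flat_pos[idx] - flat_pos[:, None, :]                   # (N, S, 3)
    dist2 = (to_src ** 2).sum(-1) + 1e-6
    wi = to_src / np.sqrt(dist2)[..., None]                         # unit dirs
    cos_r = np.clip((wi * flat_nrm[:, None, :]).sum(-1), 0.0, None)
    cos_s = np.clip((-wi * flat_nrm[idx]).sum(-1), 0.0, None)
    geom = cos_r * cos_s / dist2                                    # form factor
    indirect = (flat_rad[idx] * geom[..., None]).mean(1) \
        * gbuffer_alb.reshape(-1, 3)

    return direct + indirect.reshape(H, W, 3)
```

Because the indirect pass reads only the G-buffer and the direct-lighting image, it stays cheap (no ray-scene intersection against the Gaussians), which is the efficiency argument the abstract makes for deferred shading.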