Rendering diffuse global illumination in real time is often approximated by pre-computing irradiance and storing it in a 3D grid of probes. As long as most of the scene remains static, the probes approximate irradiance for all surfaces immersed in the irradiance volume, including novel dynamic objects. This approach, however, suffers from aliasing artifacts and high memory consumption. We propose Neural Irradiance Volume (NIV), a neural technique that enables accurate real-time rendering of diffuse global illumination via a compact pre-computed model, overcoming the limitations of traditional probe-based methods: high memory footprint, aliasing artifacts, and scene-specific heuristics. The key insight is that neural compression creates an adaptive, amortized representation of irradiance, circumventing the cubic scaling of grid-based methods. Our superior memory scaling improves quality by at least 10x at the same memory budget and enables a straightforward representation of higher-dimensional irradiance fields, allowing time-varying or dynamic effects to be rendered without additional computation at runtime. Unlike other neural rendering techniques, our method works within strict real-time constraints: it provides fast inference (around 1 ms per frame at full-HD resolution on consumer GPUs), has a small memory footprint (1-5 MB for medium-sized scenes), and requires only a G-buffer as input, with no expensive ray tracing or denoising.
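To make the grid-based baseline concrete, the following is a minimal sketch (not the paper's NIV method) of how a dense irradiance-probe grid is typically queried: irradiance at a shaded point is trilinearly interpolated from the eight surrounding probes. The function name, grid layout, and unit probe spacing are assumptions for illustration; the cubic growth of `probes` with grid resolution is the memory scaling the abstract argues against.

```python
import numpy as np

def sample_probe_grid(probes, pos):
    """Trilinearly interpolate RGB irradiance from a dense probe grid.

    probes: (Nx, Ny, Nz, 3) array of irradiance at unit-spaced probes
            (assumed layout for this sketch; memory grows cubically).
    pos:    3-vector in grid coordinates.
    """
    pos = np.asarray(pos, dtype=np.float64)
    lo = np.floor(pos).astype(int)
    # Clamp the base corner so the +1 neighbors stay inside the grid.
    lo = np.clip(lo, 0, np.array(probes.shape[:3]) - 2)
    t = pos - lo  # fractional offsets within the cell, in [0, 1]
    acc = np.zeros(3)
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                # Trilinear weight: product of per-axis lerp factors.
                w = ((t[0] if dx else 1.0 - t[0]) *
                     (t[1] if dy else 1.0 - t[1]) *
                     (t[2] if dz else 1.0 - t[2]))
                acc += w * probes[lo[0] + dx, lo[1] + dy, lo[2] + dz]
    return acc
```

At render time every G-buffer pixel performs such a lookup at its world position; NIV replaces the dense `probes` array with a compact neural model queried the same way.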