When a 3D Gaussian Splatting (3DGS) model is viewed from camera positions far outside the training data distribution, substantial visual noise commonly appears. These artifacts stem from the absence of training data in such extrapolated regions, which leaves the model's density, color, and geometry predictions uncertain. To address this issue, we propose a novel real-time, render-aware filtering method. Our approach leverages sensitivity scores derived from intermediate gradients, explicitly targeting instabilities caused by anisotropic orientations rather than isotropic variance. This filtering directly addresses the core issue of generative uncertainty, allowing 3D reconstruction systems to maintain high visual fidelity even when users freely navigate outside the original training viewpoints. Experimental evaluation demonstrates that our method substantially improves visual quality, realism, and consistency compared with existing Neural Radiance Field (NeRF)-based approaches such as BayesRays. Critically, our filter integrates seamlessly into existing 3DGS rendering pipelines in real time, unlike methods that require extensive post-hoc retraining or fine-tuning. Code and results are available at https://damian-bowness.github.io/EV3DGS
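As an illustration only (the abstract does not specify the exact computation), the following minimal PyTorch sketch shows one way gradient-derived, per-Gaussian sensitivity scores could drive such a render-time filter. The names `sensitivity_filter`, `render_fn`, and `tau`, the scalar proxy used for backpropagation, and the quantile threshold are all assumptions for illustration, not details taken from the paper.

```python
import torch

def sensitivity_filter(render_fn, means, quats, scales, opacities, colors,
                       camera, tau=0.9):
    """Hypothetical sketch: score each Gaussian by the gradient of the
    rendered image with respect to its orientation (quaternion) parameters,
    then mask out Gaussians whose scores exceed a quantile threshold.

    Scoring orientation gradients (rather than, say, scale or opacity)
    mirrors the abstract's focus on anisotropic-orientation instability
    instead of isotropic variance.
    """
    # Track gradients only for the orientation parameters.
    quats = quats.detach().requires_grad_(True)

    # Render once from the current (possibly extrapolated) camera.
    image = render_fn(means, quats, scales, opacities, colors, camera)

    # Backpropagate a scalar proxy of the rendered image to obtain
    # per-Gaussian intermediate gradients. (The sum is an assumed proxy;
    # the paper's actual objective may differ.)
    image.sum().backward()

    # Sensitivity score: gradient magnitude over each Gaussian's
    # orientation parameters, shape (num_gaussians,).
    scores = quats.grad.norm(dim=-1)

    # Keep Gaussians below the tau-quantile of sensitivity; the returned
    # boolean mask would be applied before rasterizing the frame.
    keep = scores <= torch.quantile(scores, tau)
    return keep
```

For a filter like this to run at interactive rates, as the abstract claims, the scores would have to be cheap to obtain each frame (or cached and reused across nearby viewpoints); this sketch deliberately omits such engineering.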