Reconstructing dynamic 3D scenes from monocular input is fundamentally under-constrained, with ambiguities arising from occlusion and extreme novel views. While dynamic Gaussian Splatting offers an efficient representation, vanilla models optimize all Gaussian primitives uniformly, ignoring whether they are well or poorly observed. This limitation leads to motion drift under occlusion and degraded synthesis when extrapolating to unseen views. We argue that uncertainty matters: Gaussians with recurring observations across views and time act as reliable anchors to guide motion, whereas those with limited visibility should be treated as less reliable. To this end, we introduce USplat4D, a novel Uncertainty-aware dynamic Gaussian Splatting framework that propagates reliable motion cues to enhance 4D reconstruction. Our key insight is to estimate time-varying per-Gaussian uncertainty and leverage it to construct a spatio-temporal graph for uncertainty-aware optimization. Experiments on diverse real and synthetic datasets show that explicitly modeling uncertainty consistently improves dynamic Gaussian Splatting models, yielding more stable geometry under occlusion and high-quality synthesis at extreme viewpoints.
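To make the two ingredients of the abstract concrete, below is a minimal PyTorch sketch of (1) a time-varying per-Gaussian uncertainty derived from how often each Gaussian is observed across frames, and (2) a k-NN spatio-temporal graph through which low-uncertainty "anchor" Gaussians guide the motion of high-uncertainty neighbors. All function names, shapes, and formulas here are illustrative assumptions, not the paper's actual implementation.

```python
import torch

def per_gaussian_uncertainty(visibility: torch.Tensor) -> torch.Tensor:
    """visibility: (T, N) soft visibility of N Gaussians over T frames.
    Returns (T, N) uncertainty in [0, 1]; rarely observed Gaussians score high.
    The exponential decay is an assumed saturating map, chosen for simplicity."""
    evidence = torch.cumsum(visibility, dim=0)  # accumulated observations per Gaussian
    return torch.exp(-evidence)                 # more recurring evidence -> lower uncertainty

def knn_spatiotemporal_graph(means: torch.Tensor, k: int = 8) -> torch.Tensor:
    """means: (T, N, 3) Gaussian centers over time. Returns (N, k) neighbor indices
    using distances between whole trajectories, a crude proxy for Gaussians that
    stay close in both space and time."""
    traj = means.transpose(0, 1).reshape(means.shape[1], -1)  # (N, T*3) stacked trajectories
    d = torch.cdist(traj, traj)
    d.fill_diagonal_(float("inf"))              # exclude self-matches
    return d.topk(k, largest=False).indices

def motion_propagation_loss(means: torch.Tensor, unc: torch.Tensor,
                            nbrs: torch.Tensor) -> torch.Tensor:
    """Encourage each Gaussian's frame-to-frame motion to agree with the
    uncertainty-weighted motion of its graph neighbors, so reliable anchors
    dominate the cue that uncertain Gaussians follow."""
    motion = means[1:] - means[:-1]                      # (T-1, N, 3)
    nbr_motion = motion[:, nbrs]                         # (T-1, N, k, 3)
    w = (1.0 - unc[:-1])[:, nbrs]                        # anchor reliability weights
    w = w / w.sum(dim=-1, keepdim=True).clamp_min(1e-8)  # normalize over neighbors
    target = (w.unsqueeze(-1) * nbr_motion).sum(dim=2)   # propagated motion cue
    # Penalize disagreement more strongly for uncertain Gaussians: they should
    # follow their anchors rather than drift on their own.
    return (unc[:-1] * (motion - target).norm(dim=-1)).mean()
```

In this reading, the loss is one plausible way the graph could "propagate reliable motion cues": well-observed Gaussians contribute high weights to their neighbors' motion targets, while poorly observed ones are pulled toward those targets instead of being optimized in isolation.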