Reconstructing dynamic 3D scenes from monocular input is fundamentally under-constrained, with ambiguities arising from occlusion and extreme novel views. While dynamic Gaussian Splatting offers an efficient representation, vanilla models optimize all Gaussian primitives uniformly, ignoring whether they are well or poorly observed. This limitation leads to motion drift under occlusion and degraded synthesis when extrapolating to unseen views. We argue that uncertainty matters: Gaussians with recurring observations across views and time act as reliable anchors that guide motion, whereas those with limited visibility should be treated as less reliable. To this end, we introduce USplat4D, a novel Uncertainty-aware dynamic Gaussian Splatting framework that propagates reliable motion cues to enhance 4D reconstruction. Our approach estimates time-varying per-Gaussian uncertainty and leverages it to construct a spatio-temporal graph for uncertainty-aware optimization. Experiments on diverse real and synthetic datasets show that explicitly modeling uncertainty consistently improves dynamic Gaussian Splatting models, yielding more stable geometry under occlusion and high-quality synthesis at extreme viewpoints.
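The core idea of down-weighting poorly observed Gaussians can be illustrated with a minimal sketch. This is not the paper's actual method: the uncertainty formula, function names, and weighting scheme below are illustrative assumptions, standing in for the time-varying per-Gaussian uncertainty and spatio-temporal graph described above.

```python
import numpy as np

def observation_uncertainty(obs_counts, eps=1e-6):
    """Hypothetical mapping from per-Gaussian observation counts (across
    views and time) to an uncertainty in (0, 1]: frequently observed
    Gaussians receive low uncertainty, rarely seen ones high uncertainty."""
    obs_counts = np.asarray(obs_counts, dtype=float)
    return 1.0 / (1.0 + obs_counts + eps)

def weighted_motion_loss(residuals, obs_counts):
    """Per-Gaussian motion residuals weighted by reliability (1 - uncertainty),
    so well-observed 'anchor' Gaussians dominate the objective while
    occluded or rarely seen Gaussians contribute less."""
    u = observation_uncertainty(obs_counts)
    w = 1.0 - u
    r = np.asarray(residuals, dtype=float)
    return float(np.sum(w * r**2) / np.sum(w))

# A Gaussian seen in 50 frames contributes far more to the loss than
# one seen only once (e.g. briefly visible before occlusion).
loss = weighted_motion_loss(residuals=[0.1, 0.5], obs_counts=[50, 1])
```

In this toy weighting, reliable Gaussians effectively anchor the motion estimate; a full method would instead propagate their motion cues to uncertain neighbors through the spatio-temporal graph.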