The recent development of 3D Gaussian Splatting (3DGS) has sparked great interest in 4D dynamic spatial reconstruction from multi-view visual inputs. While existing approaches mainly rely on processing full-length multi-view videos for 4D reconstruction, there has been limited exploration of iterative online reconstruction methods that enable on-the-fly training and per-frame streaming. Current 3DGS-based streaming methods treat all Gaussian primitives uniformly and constantly renew the densified Gaussians, thereby overlooking the distinction between dynamic and static features and neglecting the temporal continuity of the scene. To address these limitations, we propose a novel three-stage pipeline for iterative, streamable 4D dynamic spatial reconstruction. Our pipeline comprises a selective inheritance stage that preserves temporal continuity, a dynamics-aware shift stage that distinguishes dynamic from static primitives and optimizes their movements, and an error-guided densification stage that accommodates emerging objects. Our method achieves state-of-the-art performance in online 4D reconstruction, with a 20% improvement in on-the-fly training speed, superior representation quality, and real-time rendering capability. Project page: https://www.liuzhening.top/DASS
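To make the per-frame flow concrete, the following is a minimal sketch of the three stages named above. All details here are hypothetical, not the paper's actual method: primitives are reduced to bare positions, the per-primitive error model, the thresholds `dyn_thresh` and `dens_thresh`, and the fixed nudge standing in for motion optimization are all illustrative assumptions.

```python
import numpy as np

def stream_frame(prev_gaussians, frame_error, dyn_thresh=0.1, dens_thresh=0.5):
    """One illustrative per-frame update (hypothetical simplification).

    prev_gaussians: (N, 3) primitive positions from the previous frame
    frame_error:    (N,) per-primitive rendering error on the new frame
    """
    # Stage 1: selective inheritance -- carry the previous frame's
    # primitives forward instead of re-optimizing from scratch,
    # preserving temporal continuity.
    gaussians = prev_gaussians.copy()

    # Stage 2: dynamics-aware shift -- flag high-error primitives as
    # dynamic and update only their positions; static primitives are
    # left untouched. (A constant nudge stands in for a gradient step.)
    dynamic = frame_error > dyn_thresh
    gaussians[dynamic] += 0.01

    # Stage 3: error-guided densification -- clone primitives where the
    # residual error stays high, to cover newly appearing content.
    clones = gaussians[frame_error > dens_thresh]
    return np.vstack([gaussians, clones]), dynamic
```

In a real streaming loop, the returned set would seed the next frame's inheritance stage, so only the dynamic subset and the densified clones incur optimization cost per frame.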