Previous surface reconstruction methods suffer from either low geometric accuracy or lengthy training times when dealing with real-world complex dynamic scenes involving multi-person activities and human-object interactions. To handle the dynamic content and occlusions in such complex scenes, we present a space-time 2D Gaussian Splatting approach. Specifically, to improve geometric quality in dynamic scenes, we learn canonical 2D Gaussian splats and deform them over time, while introducing depth and normal regularizers that constrain the Gaussian disks to lie on object surfaces. Further, to address occlusions in complex scenes, we introduce a compositional opacity deformation strategy, which further reduces surface reconstruction errors in occluded areas. Experiments on real-world sparse-view video datasets and monocular dynamic datasets demonstrate that our reconstructions outperform state-of-the-art methods, especially in recovering fine surface details. The project page and more visualizations can be found at: https://tb2-sy.github.io/st-2dgs/.
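The depth and normal regularizers mentioned above can be illustrated with a minimal sketch in the spirit of 2D Gaussian Splatting's surface constraints: a normal-consistency term that aligns rendered splat normals with normals derived from the depth map, and a depth-distortion term that concentrates each ray's Gaussian contributions at a single depth. Function names, shapes, and exact weighting here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def normal_consistency_loss(rendered_normals, depth_normals):
    """Penalize disagreement between rendered splat normals and normals
    derived from the rendered depth map, via 1 - cosine similarity.
    Both inputs: (H, W, 3), assumed unit-length. (Illustrative sketch.)"""
    cos = np.sum(rendered_normals * depth_normals, axis=-1)
    return np.mean(1.0 - cos)

def depth_distortion_loss(weights, depths):
    """Encourage the Gaussians contributing to each ray to concentrate at
    one depth: sum_{i,j} w_i * w_j * |d_i - d_j| per ray, averaged.
    weights, depths: (N_rays, N_contributors). (Illustrative sketch.)"""
    d_diff = np.abs(depths[:, :, None] - depths[:, None, :])
    w_pair = weights[:, :, None] * weights[:, None, :]
    return np.mean(np.sum(w_pair * d_diff, axis=(1, 2)))
```

Both terms are zero in the ideal case (normals agree everywhere; all mass on a ray sits at one depth), so adding them to the photometric loss pushes the deformed disks toward the true surface.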