We introduce 4D Motion Scaffolds (MoSca), a modern 4D reconstruction system designed to reconstruct and synthesize novel views of dynamic scenes from monocular videos captured casually in the wild. To address this challenging, ill-posed inverse problem, we leverage prior knowledge from foundational vision models and lift the video data to a novel Motion Scaffold (MoSca) representation, which compactly and smoothly encodes the underlying motions and deformations. The scene geometry and appearance are then disentangled from the deformation field and are encoded by globally fusing Gaussians anchored onto the MoSca, optimized via Gaussian Splatting. Additionally, the camera focal length and poses can be solved via bundle adjustment, without the need for any external pose estimation tools. Experiments demonstrate state-of-the-art performance on dynamic rendering benchmarks as well as the system's effectiveness on real-world videos.
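To make the idea of a deformation field anchored on scaffold nodes concrete, here is a minimal sketch of how Gaussians (or any 3D points) could be warped by blending rigid transforms of nearby scaffold nodes. This assumes a simple k-nearest-neighbor linear blend skinning with Gaussian distance weights; the paper's actual blending scheme and all names below (`skin_points`, `sigma`, etc.) are illustrative, not the authors' implementation.

```python
import numpy as np

def skin_points(points, node_pos, node_rot, node_trans, k=4, sigma=0.1):
    """Warp points by blending rigid transforms of k-nearest scaffold nodes.

    points:     (N, 3) canonical-frame positions (e.g. Gaussian means).
    node_pos:   (M, 3) canonical positions of scaffold nodes.
    node_rot:   (M, 3, 3) per-node rotations (canonical -> target frame).
    node_trans: (M, 3) per-node translations (canonical -> target frame).
    Returns (N, 3) warped positions.
    """
    # Distance from every point to every scaffold node.
    d = np.linalg.norm(points[:, None, :] - node_pos[None, :, :], axis=-1)  # (N, M)
    idx = np.argsort(d, axis=1)[:, :k]                                      # (N, k)
    dk = np.take_along_axis(d, idx, axis=1)

    # Gaussian falloff weights, normalized over the k neighbors.
    w = np.exp(-dk**2 / (2.0 * sigma**2))
    w = w / w.sum(axis=1, keepdims=True)                                    # (N, k)

    # Apply each neighbor node's rigid transform to the point, then blend.
    rel = points[:, None, :] - node_pos[idx]                                # (N, k, 3)
    moved = (np.einsum('nkij,nkj->nki', node_rot[idx], rel)
             + node_pos[idx] + node_trans[idx])                             # (N, k, 3)
    return (w[..., None] * moved).sum(axis=1)
```

Because each point only blends a few smoothly moving node transforms, the warp stays compact (a handful of SE(3) trajectories instead of per-point motion) and spatially smooth, which is the property the scaffold representation is designed to exploit.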