Generating a dynamic 3D object from a single-view video is challenging due to the lack of labeled 4D data. An intuitive approach is to extend previous image-to-3D pipelines by transferring off-the-shelf image generation models, e.g., via score distillation sampling. However, this approach is slow and expensive to scale, because it requires back-propagating information-limited supervision signals through a large pretrained model. To address this, we propose Efficient4D, an efficient video-to-4D object generation framework. It first generates high-quality, spacetime-consistent images under different camera views, and then uses them as labeled data to directly reconstruct the 4D content through a 4D Gaussian splatting model. Importantly, our method achieves real-time rendering under continuous camera trajectories. To enable robust reconstruction under sparse views, we introduce an inconsistency-aware confidence-weighted loss, along with a lightly-weighted score distillation loss. Extensive experiments on both synthetic and real videos show that Efficient4D offers a remarkable 10-fold speedup over prior-art alternatives while preserving novel-view synthesis quality. For example, Efficient4D takes only 10 minutes to model a dynamic object, versus 120 minutes for the prior state-of-the-art model Consistent4D.
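The confidence-weighted loss described above can be illustrated with a minimal sketch. This is a hypothetical NumPy illustration, not the paper's implementation: the function name, the scalar SDS surrogate `sds_term`, and the weight `lambda_sds` are all assumptions introduced for clarity. The idea is that a per-pixel confidence map in [0, 1] down-weights regions where the generated multi-view images are spacetime-inconsistent, while a small score-distillation term acts as a light regularizer.

```python
import numpy as np

def confidence_weighted_loss(rendered, target, confidence,
                             sds_term=0.0, lambda_sds=0.01):
    """Hypothetical sketch of an inconsistency-aware reconstruction loss.

    rendered, target : arrays of rendered / generated pseudo-label pixels
    confidence       : per-pixel weights in [0, 1]; low values mark pixels
                       where the synthesized views disagree across space/time
    sds_term         : scalar stand-in for a score-distillation loss value
    lambda_sds       : small weight, since SDS is only a light regularizer
    """
    # Confidence-weighted L1 photometric loss: inconsistent pixels
    # contribute less, making sparse-view reconstruction more robust.
    photometric = np.mean(confidence * np.abs(rendered - target))
    # Lightly-weighted score-distillation contribution.
    return photometric + lambda_sds * sds_term
```

In this sketch, setting the confidence to zero at a pixel removes it from the photometric objective entirely, which is the intended effect for regions flagged as inconsistent across the generated views.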