Accurate reconstruction of complex dynamic scenes from just a single viewpoint remains a challenging task in computer vision. Current dynamic novel view synthesis methods typically require videos from many different camera viewpoints, necessitating careful recording setups and significantly restricting their utility in the wild as well as in embodied AI applications. In this paper, we propose $\textbf{GCD}$, a controllable monocular dynamic view synthesis pipeline that, given a video of any scene, leverages large-scale diffusion priors to generate a synchronous video from any other chosen perspective, conditioned on a set of relative camera pose parameters. Our model does not require depth as input and does not explicitly model 3D scene geometry; instead, it performs end-to-end video-to-video translation to achieve its goal efficiently. Despite being trained only on synthetic multi-view video data, our model shows promising zero-shot real-world generalization across multiple domains, including robotics, object permanence, and driving environments. We believe our framework has the potential to unlock powerful applications in rich dynamic scene understanding, perception for robotics, and interactive 3D video viewing experiences for virtual reality.
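To make the conditioning interface concrete, below is a minimal sketch of how a relative camera pose can be injected into a video-to-video diffusion denoiser of the kind the abstract describes. This is an illustrative toy, not the authors' architecture: all names (`PoseConditionedDenoiser`, `pose_dim`, the latent shapes) are hypothetical, and the network is reduced to a single 3D convolution for brevity.

```python
# Hypothetical sketch: a denoiser that predicts noise for target-view video
# latents, conditioned on source-view latents and a relative camera pose.
# Module names, dimensions, and the pose parameterization are assumptions.
import torch
import torch.nn as nn


class PoseConditionedDenoiser(nn.Module):
    def __init__(self, channels: int = 8, pose_dim: int = 6, embed_dim: int = 64):
        super().__init__()
        # Embed the relative pose (e.g. 6-DoF rotation + translation
        # parameters) into a per-channel feature that modulates the
        # source-view latents.
        self.pose_embed = nn.Sequential(
            nn.Linear(pose_dim, embed_dim),
            nn.SiLU(),
            nn.Linear(embed_dim, channels),
        )
        # Source and noisy target latents are concatenated channel-wise,
        # mirroring the end-to-end video-to-video translation idea.
        self.net = nn.Conv3d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, noisy_target, source, rel_pose):
        # noisy_target, source: (B, C, T, H, W); rel_pose: (B, pose_dim)
        cond = self.pose_embed(rel_pose)[:, :, None, None, None]
        x = torch.cat([noisy_target, source + cond], dim=1)
        return self.net(x)  # predicted noise, same shape as noisy_target


# Minimal usage: 4-frame, 16x16 latent videos and a 6-DoF relative pose.
model = PoseConditionedDenoiser()
src = torch.randn(1, 8, 4, 16, 16)
tgt = torch.randn(1, 8, 4, 16, 16)
pose = torch.randn(1, 6)
eps = model(tgt, src, pose)
assert eps.shape == tgt.shape
```

The key design point the sketch captures is that the viewpoint change enters purely as a conditioning signal on a 2D/3D-convolutional video model, rather than through any explicit 3D geometry or depth input.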