Panoramic image stitching provides a unified, wide-angle view of a scene that extends beyond the camera's field of view. Stitching frames of a panning video into a panoramic photograph is a well-understood problem for stationary scenes, but when objects are moving, a still panorama cannot capture the scene's dynamics. We present a method for synthesizing a panoramic video from a casually captured panning video, as if the original video had been captured with a wide-angle camera. We pose panorama synthesis as a space-time outpainting problem, in which we aim to create a full panoramic video of the same length as the input video. Consistent completion of the space-time volume requires a powerful, realistic prior over video content and motion, for which we adapt generative video models. As we show, however, existing generative models do not immediately extend to panorama completion. We instead apply video generation as a component of our panorama synthesis system, and demonstrate how to exploit the strengths of the models while minimizing their limitations. Our system can create video panoramas for a range of in-the-wild scenes including people, vehicles, and flowing water, as well as stationary background features.
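The space-time outpainting formulation described above can be illustrated with a minimal sketch. The example below is an assumption-laden toy, not the paper's implementation: it registers the frames of a panning video onto a wide panoramic canvas using given horizontal offsets (standing in for real camera registration) and marks the unobserved space-time region that a generative video model would be asked to complete. The function name and the use of grayscale frames are illustrative choices.

```python
import numpy as np

def build_spacetime_canvas(frames, x_offsets, pano_width):
    """Place each frame of a panning video onto a wide panoramic canvas.

    frames: (T, H, W) array of grayscale frames (toy stand-in for RGB video).
    x_offsets: horizontal placement of each frame on the canvas; in a real
               system these would come from camera registration/homographies.
    Returns the canvas video and a boolean mask of unobserved space-time
    pixels, i.e. the region a generative video prior would outpaint.
    """
    T, H, W = frames.shape
    canvas = np.zeros((T, H, pano_width), dtype=frames.dtype)
    observed = np.zeros((T, H, pano_width), dtype=bool)
    for t, x0 in enumerate(x_offsets):
        canvas[t, :, x0:x0 + W] = frames[t]   # paste the observed frame
        observed[t, :, x0:x0 + W] = True
    missing = ~observed  # space-time volume left for the video model to fill
    return canvas, missing

# Toy example: 4 frames of a rightward pan across a 10-pixel-wide panorama.
frames = np.ones((4, 2, 4), dtype=np.float32)
canvas, missing = build_spacetime_canvas(frames, x_offsets=[0, 2, 4, 6],
                                         pano_width=10)
```

At every time step only a narrow window of the panorama is observed, so most of the space-time volume is in the `missing` mask; this is the completion problem the abstract describes handing to an adapted generative video model.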