Immersive applications call for synthesizing spatiotemporal 4D content from casual videos without costly 3D supervision. Existing video-to-4D methods typically rely on manually annotated camera poses, which are labor-intensive to obtain and brittle for in-the-wild footage. Recent warp-then-inpaint approaches mitigate the need for pose labels by warping input frames along a novel camera trajectory and using an inpainting model to fill missing regions, thereby depicting the 4D scene from diverse viewpoints. However, this trajectory-to-trajectory formulation often entangles camera motion with scene dynamics and complicates both modeling and inference. We introduce SEE4D, a pose-free, trajectory-to-camera framework that replaces explicit trajectory prediction with rendering to a bank of fixed virtual cameras, thereby separating camera control from scene modeling. A view-conditional video inpainting model is trained to learn a robust geometry prior by denoising realistically synthesized warped images and to inpaint occluded or missing regions across virtual viewpoints, eliminating the need for explicit 3D annotations. Building on this inpainting core, we design a spatiotemporal autoregressive inference pipeline that traverses virtual-camera splines and extends videos with overlapping windows, enabling coherent generation at bounded per-step complexity. We validate SEE4D on cross-view video generation and sparse reconstruction benchmarks. Across quantitative metrics and qualitative assessments, our method achieves superior generalization and improved performance relative to pose- or trajectory-conditioned baselines, advancing practical 4D world modeling from casual videos.
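The sketch below is a minimal illustration of the spatiotemporal autoregressive inference described above: warp frames into a fixed bank of virtual cameras, inpaint the missing regions with a view-conditional model, and extend the video in overlapping temporal windows. All names and signatures (`VideoInpainter`, `warp_to_camera`, `virtual_cameras`, the window and overlap sizes) are hypothetical illustrations, not the paper's actual API.

```python
# Minimal sketch of a warp-then-inpaint autoregressive inference loop,
# under the assumptions stated in the lead-in. Not the authors' implementation.
from typing import List
import numpy as np


class VideoInpainter:
    """Placeholder for the view-conditional video inpainting model."""

    def __call__(self, warped: np.ndarray, mask: np.ndarray) -> np.ndarray:
        # A trained inpainter would denoise `warped` and fill the masked
        # (occluded or missing) regions for the target virtual viewpoint.
        raise NotImplementedError


def warp_to_camera(frames: np.ndarray, depth: np.ndarray, cam) -> tuple:
    """Hypothetical helper: forward-warp frames into a fixed virtual camera
    using per-frame depth; returns warped frames and a validity mask."""
    raise NotImplementedError


def autoregressive_inference(
    input_frames: np.ndarray,   # (T, H, W, 3) casual source video
    depth: np.ndarray,          # (T, H, W) per-frame depth estimates
    virtual_cameras: List,      # fixed camera bank sampled along a spline
    inpainter: VideoInpainter,
    window: int = 16,
    overlap: int = 4,
) -> List[np.ndarray]:
    """Traverse the virtual-camera spline and extend the video with
    overlapping temporal windows, keeping per-step cost bounded."""
    outputs = []
    current = input_frames
    for cam in virtual_cameras:
        rendered = []
        t = 0
        while t < current.shape[0]:
            clip = current[t:t + window]
            clip_depth = depth[t:t + window]
            warped, mask = warp_to_camera(clip, clip_depth, cam)
            completed = inpainter(warped, mask)
            # Keep only the non-overlapping suffix after the first window so
            # consecutive windows stitch into one temporally coherent video.
            rendered.append(completed if t == 0 else completed[overlap:])
            t += window - overlap
        view_video = np.concatenate(rendered, axis=0)
        outputs.append(view_video)
        # Autoregressive step: the completed view conditions the next camera.
        current = view_video
    return outputs
```

Because the camera bank is fixed rather than predicted, camera control here is purely a choice of which `cam` to render into, while scene dynamics are handled entirely by the inpainting model.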