Numerous recent works have integrated 3D camera control into foundational text-to-video models, but the resulting camera control is often imprecise and video generation quality suffers. In this work, we analyze camera motion from a first-principles perspective, uncovering insights that enable precise 3D camera manipulation without compromising synthesis quality. First, we determine that motion induced by camera movements in videos is low-frequency in nature. This motivates us to adjust the train and test pose-conditioning schedules, accelerating training convergence while improving visual and motion quality. Then, by probing the representations of an unconditional video diffusion transformer, we observe that they implicitly perform camera pose estimation under the hood, and that only a subset of their layers contain the camera information. This led us to limit the injection of camera conditioning to a subset of the architecture to prevent interference with other video features, yielding a 4x reduction in training parameters, improved training speed, and 10% higher visual quality. Finally, we complement the typical dataset for camera-control learning with a curated dataset of 20K diverse dynamic videos captured with stationary cameras. This helps the model disambiguate camera motion from scene motion and improves the dynamics of generated pose-conditioned videos. Together, these findings yield the design of the Advanced 3D Camera Control (AC3D) architecture, the new state-of-the-art model for generative video modeling with camera control.
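The layer-restricted conditioning described above can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the abstract states only that a subset of layers carries camera information, so the function name `camera_injection_mask` and the default fraction are assumptions (0.25 is chosen to match the reported ~4x parameter reduction).

```python
def camera_injection_mask(num_blocks: int, inject_fraction: float = 0.25) -> list[bool]:
    """Flag which diffusion-transformer blocks receive pose-conditioning adapters.

    Hypothetical helper: conditioning only `inject_fraction` of the blocks
    means only that fraction of adapter parameters is trained, roughly a
    1 / inject_fraction reduction versus conditioning every block.
    """
    cutoff = max(1, round(num_blocks * inject_fraction))
    # Probing suggests camera information lives in a contiguous subset of
    # layers, so the sketch conditions the first `cutoff` blocks only.
    return [i < cutoff for i in range(num_blocks)]


if __name__ == "__main__":
    mask = camera_injection_mask(num_blocks=28)
    print(sum(mask))  # number of blocks that receive camera conditioning
```

Blocks outside the mask are left untouched, which is what prevents the pose signal from interfering with the video features those layers encode.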