Recent techniques for text-to-4D generation synthesize dynamic 3D scenes using supervision from pre-trained text-to-video models. However, existing representations for motion, such as deformation models or time-dependent neural representations, are limited in the amount of motion they can generate: they cannot synthesize motion extending far beyond the bounding box used for volume rendering. The lack of a more flexible motion model contributes to the gap in realism between 4D generation methods and recent, near-photorealistic video generation models. Here, we propose TC4D: trajectory-conditioned text-to-4D generation, which factors motion into global and local components. We represent the global motion of a scene's bounding box with a rigid transformation along a trajectory parameterized by a spline, and we learn local deformations that conform to this global trajectory using supervision from a text-to-video model. Our approach enables the synthesis of scenes animated along arbitrary trajectories, compositional scene generation, and significant improvements to the realism and amount of generated motion, which we evaluate qualitatively and through a user study. Video results can be viewed on our website: https://sherwinbahmani.github.io/tc4d.
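The following is a minimal, illustrative PyTorch sketch (not the authors' released code) of the motion factorization described in the abstract: a learned local deformation in the canonical frame composed with a rigid transformation that carries the scene's bounding box along a spline trajectory. The Catmull-Rom spline, the yaw-only orientation derived from the trajectory tangent, and all names (`eval_spline`, `global_motion`, `local_deform`, `deform`) are assumptions made for illustration; TC4D's actual spline type, rotation handling, and deformation network may differ.

```python
# Illustrative sketch of trajectory-conditioned motion: canonical points are
# displaced by a learned local deformation, then rigidly carried along a
# spline trajectory. All names and choices here are assumptions, not TC4D's
# released implementation.
import torch


def eval_spline(ctrl: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
    """Evaluate a Catmull-Rom spline through control points ctrl (K, 3) at times t in [0, 1]."""
    n = ctrl.shape[0] - 1                               # number of spline segments
    seg = torch.clamp((t * n).long(), 0, n - 1)         # segment index per sample
    u = (t * n - seg.float()).unsqueeze(-1)             # local parameter in [0, 1]
    p = torch.cat([ctrl[:1], ctrl, ctrl[-1:]], dim=0)   # duplicate endpoints (clamped ends)
    p0, p1, p2, p3 = p[seg], p[seg + 1], p[seg + 2], p[seg + 3]
    # Standard Catmull-Rom basis; the curve interpolates the control points.
    return 0.5 * (2 * p1
                  + (-p0 + p2) * u
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * u**2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * u**3)


def global_motion(x: torch.Tensor, t: torch.Tensor, ctrl: torch.Tensor) -> torch.Tensor:
    """Rigidly move points x (N, 3) along the trajectory: rotate to face the
    direction of travel (yaw only), then translate to the spline position.
    The finite-difference tangent degenerates at t = 1 in this toy version."""
    pos = eval_spline(ctrl, t)
    tangent = eval_spline(ctrl, torch.clamp(t + 1e-3, max=1.0)) - pos
    yaw = torch.atan2(tangent[..., 1], tangent[..., 0])  # heading in the xy-plane
    c, s = torch.cos(yaw), torch.sin(yaw)
    z, o = torch.zeros_like(yaw), torch.ones_like(yaw)
    R = torch.stack([torch.stack([c, -s, z], -1),
                     torch.stack([s, c, z], -1),
                     torch.stack([z, z, o], -1)], dim=-2)  # (N, 3, 3) rotation about z
    return (R @ x.unsqueeze(-1)).squeeze(-1) + pos


# A toy MLP stands in for the learned local deformation field.
local_deform = torch.nn.Sequential(
    torch.nn.Linear(4, 64), torch.nn.ReLU(), torch.nn.Linear(64, 3))


def deform(x: torch.Tensor, t: torch.Tensor, ctrl: torch.Tensor) -> torch.Tensor:
    """Compose local deformation (in the canonical frame) with global rigid motion."""
    dx = local_deform(torch.cat([x, t.unsqueeze(-1)], dim=-1))
    return global_motion(x + dx, t, ctrl)


ctrl = torch.tensor([[0., 0., 0.], [1., 1., 0.], [2., 0., 0.], [3., 1., 0.]])
x = torch.rand(1024, 3) - 0.5   # canonical points inside a unit bounding box
t = torch.rand(1024)            # query times in [0, 1]
warped = deform(x, t, ctrl)     # (1024, 3) world-space points at time t
```

Under this factorization, the video-based supervision only needs to explain the residual local deformation, while large-scale displacement comes from the trajectory itself, which is consistent with the abstract's claim that motion is no longer confined to the bounding box used for volume rendering.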