Due to the impressive generative performance of text-to-image diffusion models, a growing body of text-to-3D generation work explores distilling 2D generative priors into 3D via the score distillation sampling (SDS) loss, thereby bypassing the data scarcity problem. Existing text-to-3D methods have achieved promising results in realism and 3D consistency, but text-to-4D generation still faces challenges, including a lack of realism and insufficient dynamic motion. In this paper, we propose a novel method for text-to-4D generation that ensures motion amplitude and authenticity through direct supervision provided by a video prior. Specifically, we adopt a text-to-video diffusion model to generate a reference video and divide 4D generation into two stages: static generation and dynamic generation. Static 3D generation is guided by the input text and the first frame of the reference video, while in the dynamic generation stage, we introduce a customized SDS loss to ensure multi-view consistency, a video-based SDS loss to improve temporal consistency, and, most importantly, direct priors from the reference video to ensure the quality of geometry and texture. Moreover, we design a prior-switching training strategy to avoid conflicts between the different priors and to fully leverage the benefits of each. To further enrich the generated motion, we introduce a dynamic modeling representation composed of a deformation network and a topology network, which ensures dynamic continuity while modeling topological changes. Our method supports not only text-to-4D generation but also 4D generation from monocular videos. Comparison experiments demonstrate the superiority of our method over existing approaches.
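For reference, the SDS loss named above is the standard score distillation objective from the text-to-3D literature (introduced in DreamFusion); the customized and video-based SDS losses described in the abstract are variants of this objective. Writing $g(\theta)$ for the renderer that produces an image $\mathbf{x}$ from 3D parameters $\theta$, $\hat{\boldsymbol{\epsilon}}_\phi$ for the diffusion model's noise prediction conditioned on text $y$ at timestep $t$, and $w(t)$ for a timestep weighting, its gradient is

$$
\nabla_\theta \mathcal{L}_{\mathrm{SDS}}\big(\phi, \mathbf{x}=g(\theta)\big)
= \mathbb{E}_{t,\boldsymbol{\epsilon}}\!\left[\, w(t)\,\big(\hat{\boldsymbol{\epsilon}}_\phi(\mathbf{x}_t; y, t)-\boldsymbol{\epsilon}\big)\,\frac{\partial \mathbf{x}}{\partial \theta} \,\right].
$$

The deformation-plus-topology representation is not specified in detail in the abstract; below is a minimal PyTorch sketch under the assumption that it follows the common canonical-space design, where a deformation field models continuous motion and a HyperNeRF-style ambient ("topology") field lets the canonical field model topological changes. All names here (`DynamicField`, `ambient_dim`, etc.) are hypothetical, not the authors' implementation.

```python
import torch
import torch.nn as nn


class MLP(nn.Module):
    """Simple ReLU MLP used for both fields."""

    def __init__(self, in_dim, out_dim, hidden=128, depth=4):
        super().__init__()
        layers, d = [], in_dim
        for _ in range(depth - 1):
            layers += [nn.Linear(d, hidden), nn.ReLU()]
            d = hidden
        layers.append(nn.Linear(d, out_dim))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)


class DynamicField(nn.Module):
    """Hypothetical dynamic representation: the deformation network warps each
    sample point back to a canonical frame (enforcing temporal continuity),
    while the topology network predicts extra ambient coordinates so the
    canonical field can represent topological changes."""

    def __init__(self, ambient_dim=2):
        super().__init__()
        self.deform = MLP(3 + 1, 3)           # (x, t) -> offset Δx
        self.topo = MLP(3 + 1, ambient_dim)   # (x, t) -> ambient coords w

    def forward(self, x, t):
        xt = torch.cat([x, t], dim=-1)
        x_canon = x + self.deform(xt)          # continuous deformation
        w = self.topo(xt)                      # topology-aware slice
        return torch.cat([x_canon, w], dim=-1)  # query canonical field here


# Usage: 1024 points at time t = 0.5 -> (1024, 3 + ambient_dim) canonical coords.
field = DynamicField()
coords = field(torch.rand(1024, 3), torch.full((1024, 1), 0.5))
```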