Recent advancements in video generation have primarily leveraged diffusion models for short-duration content. However, these approaches often fall short in modeling complex narratives and maintaining character consistency over extended periods, which is essential for long-form video production such as movies. We propose MovieDreamer, a novel hierarchical framework that integrates the strengths of autoregressive models with diffusion-based rendering to pioneer long-duration video generation with intricate plot progressions and high visual fidelity. Our approach uses autoregressive models to ensure global narrative coherence, predicting sequences of visual tokens that are subsequently transformed into high-quality video frames through diffusion rendering. This method mirrors traditional movie production, where a complex story is decomposed into manageable scenes that are captured individually. Furthermore, we employ a multimodal script that enriches scene descriptions with detailed character information and visual style, enhancing continuity and character identity across scenes. We present extensive experiments across various movie genres, demonstrating that our approach not only achieves superior visual and narrative quality but also extends the duration of generated content significantly beyond current capabilities. Homepage: https://aim-uofa.github.io/MovieDreamer/.
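The two-stage pipeline described above — autoregressive prediction of visual tokens followed by diffusion-based rendering of frames — can be illustrated with a toy sketch. Every function, shape, and update rule below is a hypothetical stand-in for exposition only, not the paper's actual models:

```python
import numpy as np

rng = np.random.default_rng(0)

def autoregressive_tokens(script_embedding, n_tokens=8, vocab=256):
    # Hypothetical stand-in: predict a sequence of discrete visual tokens,
    # one at a time, conditioned on a (multimodal) script embedding.
    tokens = []
    state = script_embedding.copy()
    for _ in range(n_tokens):
        logits = state @ rng.standard_normal((state.size, vocab))
        tokens.append(int(np.argmax(logits)))
        state = np.roll(state, 1)  # toy recurrent state update
    return tokens

def diffusion_render(token, size=(4, 4), steps=5):
    # Hypothetical stand-in for a diffusion decoder: iteratively
    # "denoise" random noise toward a token-conditioned target frame.
    target = np.full(size, token / vocab_size)
    x = rng.standard_normal(size)
    for _ in range(steps):
        x = x + 0.5 * (target - x)  # one simplistic denoising step
    return x

vocab_size = 256.0
tokens = autoregressive_tokens(rng.standard_normal(16))
frames = [diffusion_render(t) for t in tokens]
print(len(frames), frames[0].shape)
```

The key structural point the sketch conveys is the factorization: the token model carries long-range narrative state across the whole sequence, while the renderer only ever handles one frame at a time.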