Text-to-video models have recently undergone rapid and substantial advancements. Nevertheless, due to limitations in data and computational resources, efficiently generating long videos with rich motion dynamics remains a significant challenge. To generate high-quality, dynamic, and temporally consistent long videos, this paper presents ARLON, a novel framework that boosts diffusion Transformers with autoregressive (AR) models for long video generation, integrating the coarse spatial and long-range temporal information provided by the AR model to guide the DiT model. Specifically, ARLON incorporates several key innovations: 1) A latent Vector Quantized Variational Autoencoder (VQ-VAE) compresses the input latent space of the DiT model into compact visual tokens, bridging the AR and DiT models and balancing learning complexity against information density; 2) An adaptive norm-based semantic injection module integrates the coarse discrete visual units from the AR model into the DiT model, ensuring effective guidance during video generation; 3) To improve tolerance to the noise introduced by AR inference, the DiT model is trained on coarser visual latent tokens together with an uncertainty sampling module. Experimental results demonstrate that ARLON significantly outperforms the baseline OpenSora-V1.2 on eight of eleven metrics selected from VBench, with notable improvements in dynamic degree and aesthetic quality, while delivering competitive results on the remaining three and simultaneously accelerating the generation process. In addition, ARLON achieves state-of-the-art performance in long video generation. Detailed analyses of the improvements in inference efficiency are presented, alongside a practical application demonstrating the generation of long videos from progressive text prompts. See demos of ARLON at http://aka.ms/arlon.