This paper presents a novel approach that enables efficient autoregressive video generation. We propose to reformulate video generation as non-quantized autoregressive modeling that combines temporal frame-by-frame prediction with spatial set-by-set prediction. Unlike raster-scan prediction in prior autoregressive models or joint distribution modeling of fixed-length tokens in diffusion models, our approach maintains the causal property of GPT-style models for flexible in-context capabilities, while leveraging bidirectional modeling within individual frames for efficiency. With the proposed approach, we train a novel video autoregressive model without vector quantization, termed NOVA. Our results demonstrate that NOVA surpasses prior autoregressive video models in data efficiency, inference speed, visual fidelity, and video fluency, even with a much smaller model capacity, i.e., 0.6B parameters. NOVA also outperforms state-of-the-art image diffusion models on text-to-image generation tasks, with a significantly lower training cost. Additionally, NOVA generalizes well to extended video durations and enables diverse zero-shot applications in one unified model. Code and models are publicly available at https://github.com/baaivision/NOVA.
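The attention pattern implied above (causal across frames, bidirectional within a frame) can be illustrated with a minimal sketch. The function and its sizes below are hypothetical, not part of the NOVA release; it only shows the block-wise causal mask such a scheme would use.

```python
import numpy as np

def block_causal_mask(num_frames: int, tokens_per_frame: int) -> np.ndarray:
    """Boolean attention mask (True = may attend): tokens attend
    bidirectionally within their own frame, but only causally to
    earlier frames, never to future ones."""
    n = num_frames * tokens_per_frame
    mask = np.zeros((n, n), dtype=bool)
    for q in range(n):
        q_frame = q // tokens_per_frame
        # a query may see every token in its own or any earlier frame
        mask[q, : (q_frame + 1) * tokens_per_frame] = True
    return mask

mask = block_causal_mask(num_frames=3, tokens_per_frame=2)
# a token in frame 1 sees all of frames 0-1, but nothing in frame 2
assert mask[2, 3] and not mask[2, 4]
```

This contrasts with raster-scan prediction, where the mask would be strictly lower-triangular at the token level rather than block-wise at the frame level.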