Recent advances in auto-regressive large language models (LLMs) have led to their application in video generation. This paper explores the use of Large Vision Models (LVMs) for video continuation, a task essential for building world models and predicting future frames. We introduce ARCON, a scheme that alternates between generating semantic tokens and RGB tokens, allowing the LVM to explicitly learn high-level structural video information. We find that the generated RGB images and semantic maps are highly consistent, even without any special design for alignment. Moreover, we employ an optical-flow-based texture stitching method to enhance visual quality. Experiments in autonomous driving scenarios show that our model can consistently generate long videos.