We present Visual AutoRegressive modeling (VAR), a new generation paradigm that redefines autoregressive learning on images as coarse-to-fine "next-scale prediction" or "next-resolution prediction", diverging from the standard raster-scan "next-token prediction". This simple, intuitive methodology allows autoregressive (AR) transformers to learn visual distributions quickly and generalize well: VAR, for the first time, makes GPT-style AR models surpass diffusion transformers in image generation. On the ImageNet 256×256 benchmark, VAR significantly improves the AR baseline, reducing the Fréchet inception distance (FID) from 18.65 to 1.73 and raising the inception score (IS) from 80.4 to 350.2, with around 20× faster inference. It is also empirically verified that VAR outperforms the Diffusion Transformer (DiT) in multiple dimensions, including image quality, inference speed, data efficiency, and scalability. Scaling up VAR models exhibits clear power-law scaling laws similar to those observed in LLMs, with linear correlation coefficients near -0.998 as solid evidence. VAR further showcases zero-shot generalization in downstream tasks including image in-painting, out-painting, and editing. These results suggest that VAR has initially emulated two important properties of LLMs: scaling laws and zero-shot task generalization. We have released all models and code to promote the exploration of AR/VAR models for visual generation and unified learning.
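To make the coarse-to-fine paradigm concrete, the generation loop can be sketched as follows. This is a minimal illustration, not the released implementation: `predict_scale` is a hypothetical stand-in for the AR transformer's forward pass, and the scale schedule `[1, 2, 4, 8, 16]` is an assumed example.

```python
import numpy as np

def next_scale_generation(scales, vocab_size=4096, seed=0):
    """Sketch of coarse-to-fine "next-scale prediction": at each step the
    model predicts an entire token map at the next (higher) resolution,
    conditioned on all previously generated coarser maps -- rather than
    emitting one token at a time in raster order."""
    rng = np.random.default_rng(seed)

    def predict_scale(context_maps, side):
        # Hypothetical placeholder for the transformer: in VAR this would
        # attend over the flattened coarser token maps and produce all
        # side*side tokens of the next scale in a single step.
        return rng.integers(0, vocab_size, size=(side, side))

    token_maps = []
    for side in scales:  # e.g. 1x1 -> 2x2 -> 4x4 -> 8x8 -> 16x16
        token_maps.append(predict_scale(token_maps, side))
    # In the full model, the finest token map is decoded to pixels
    # by a multi-scale VQ decoder.
    return token_maps

maps = next_scale_generation([1, 2, 4, 8, 16])
```

Because each step emits a whole token map in parallel rather than a single token, the number of autoregressive steps grows with the number of scales, not with the number of tokens, which is the source of the inference speedup reported above.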