Recent work in computer vision, named VAR, proposes a new autoregressive paradigm for image generation. Diverging from vanilla next-token prediction, VAR structurally reformulates image generation as coarse-to-fine next-scale prediction. In this paper, we show that this scale-wise autoregressive framework can be effectively decoupled into \textit{intra-scale modeling}, which captures local spatial dependencies within each scale, and \textit{inter-scale modeling}, which models cross-scale relationships progressively from coarse to fine scales. This decoupled structure allows us to rebuild VAR in a more computationally efficient manner. Specifically, for intra-scale modeling -- crucial for generating high-fidelity images -- we retain the original bidirectional self-attention design to ensure comprehensive modeling; for inter-scale modeling, which semantically connects different scales but is computationally intensive, we apply linear-complexity mechanisms such as Mamba to substantially reduce the computational overhead. We term this new framework M-VAR. Extensive experiments demonstrate that our method outperforms existing models in both image quality and generation speed. For example, our 1.5B model, with fewer parameters and faster inference, outperforms the largest VAR-d30-2B. Moreover, our largest model, M-VAR-d32, registers an impressive 1.78 FID on ImageNet 256$\times$256, outperforming the prior-art autoregressive models LlamaGen/VAR by 0.4/0.19 and the popular diffusion models LDM/DiT by 1.82/0.49, respectively. Code is available at \url{https://github.com/OliverRensu/MVAR}.
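The intra-/inter-scale decoupling above can be illustrated with a minimal sketch. This is a hypothetical simplification, not the authors' implementation: \texttt{intra\_scale\_attention} applies quadratic-cost bidirectional self-attention only within one scale's token grid, while \texttt{inter\_scale\_linear\_scan} carries a gated running state from coarse to fine scales in a single linear pass, standing in for a Mamba-style mechanism; the function names, pooling choice, and fixed \texttt{decay} gate are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def intra_scale_attention(tokens, Wq, Wk, Wv):
    # Bidirectional self-attention restricted to ONE scale: every token
    # attends to every other token of the same scale (no causal mask).
    # Cost is quadratic, but only in the per-scale token count.
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = softmax(q @ k.T / np.sqrt(q.shape[-1]))
    return scores @ v

def inter_scale_linear_scan(scale_feats, decay=0.9):
    # Stand-in for a Mamba-like linear-complexity recurrence across scales:
    # a single hidden state summarizes coarser scales and is injected into
    # each finer scale, so cross-scale cost grows linearly with the number
    # of scales instead of quadratically with the total token count.
    state = np.zeros(scale_feats[0].shape[-1])
    outs = []
    for feat in scale_feats:  # iterate coarse -> fine
        pooled = feat.mean(axis=0)            # summary of current scale
        state = decay * state + (1 - decay) * pooled
        outs.append(feat + state)             # broadcast coarse context
    return outs
```

In this toy form, the quadratic attention never spans scales, and the cross-scale pathway touches each scale exactly once, which mirrors (in spirit) why the decoupled design reduces overhead relative to full attention over all scales' tokens.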