Recently, Visual Autoregressive ($\mathsf{VAR}$) models introduced a groundbreaking advancement in the field of image generation, offering a scalable approach through a coarse-to-fine "next-scale prediction" paradigm. However, the state-of-the-art $\mathsf{VAR}$ algorithm of [Tian, Jiang, Yuan, Peng and Wang, NeurIPS 2024] takes $O(n^4)$ time, which is computationally inefficient. In this work, we analyze the computational limits and efficiency criteria of $\mathsf{VAR}$ models through a fine-grained complexity lens. Our key contribution is identifying the conditions under which $\mathsf{VAR}$ computations can achieve sub-quadratic time complexity. Specifically, we establish a critical threshold for the norm of the input matrices used in $\mathsf{VAR}$ attention mechanisms. Above this threshold, assuming the Strong Exponential Time Hypothesis ($\mathsf{SETH}$) from fine-grained complexity theory, no sub-quartic time algorithm for $\mathsf{VAR}$ models exists. To substantiate our theoretical findings, we present efficient constructions leveraging low-rank approximations that satisfy the derived criteria. This work initiates the study of the computational efficiency of $\mathsf{VAR}$ models from a theoretical perspective. Our techniques shed light on advancing scalable and efficient image generation in $\mathsf{VAR}$ frameworks.
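To make the low-rank idea concrete, the following is a minimal sketch (not the paper's exact construction) of the standard bounded-entry trick: when the entries of $QK^\top/\sqrt{d}$ have small magnitude, $\exp(x)$ is well approximated by a truncated Taylor polynomial, so the attention matrix factors as a product of two low-rank feature matrices and the $n \times n$ matrix never needs to be formed explicitly. The feature map `phi` and the degree parameter below are illustrative choices.

```python
import numpy as np
from math import factorial

def exact_attention(Q, K, V):
    # Naive softmax attention: materializes the n x n matrix, O(n^2 d) time.
    S = np.exp(Q @ K.T / np.sqrt(Q.shape[1]))
    return (S / S.sum(axis=1, keepdims=True)) @ V

def phi(X, degree):
    # Taylor feature map: phi(q) . phi(k) = sum_{t<=degree} (q.k)^t / t!
    # which approximates exp(q.k) when |q.k| is small (bounded-norm regime).
    n, d = X.shape
    blocks = [np.ones((n, 1))]
    cur = np.ones((n, 1))
    for t in range(1, degree + 1):
        # cur holds the flattened t-fold tensor power x^{(tensor) t}.
        cur = (cur[:, :, None] * X[:, None, :]).reshape(n, -1)
        blocks.append(cur / np.sqrt(factorial(t)))
    return np.concatenate(blocks, axis=1)

def lowrank_attention(Q, K, V, degree=5):
    # Split the 1/sqrt(d) scaling evenly between Q and K, then compute
    # attention via the factorization exp(QK^T/sqrt(d)) ~= Fq @ Fk^T.
    scale = Q.shape[1] ** 0.25
    Fq, Fk = phi(Q / scale, degree), phi(K / scale, degree)
    num = Fq @ (Fk.T @ V)          # O(n * k * d): no n x n intermediate
    den = Fq @ Fk.sum(axis=0)      # row sums of the implicit attention matrix
    return num / den[:, None]
```

In the small-norm regime the two routines agree closely, while the factored form replaces the $O(n^2)$ cost with $O(nk)$ for feature dimension $k$; above the norm threshold, the truncation error blows up, matching the hardness side of the dichotomy.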