Autoregressive (AR) models have reformulated image generation as next-token prediction, demonstrating remarkable potential and emerging as strong competitors to diffusion models. However, control-to-image generation, akin to ControlNet, remains largely unexplored within AR models. A natural approach, inspired by advances in Large Language Models, is to tokenize the control image and prefill the resulting tokens into the autoregressive model before decoding image tokens; however, this approach still falls short of ControlNet in generation quality and suffers from inefficiency. To this end, we introduce ControlAR, an efficient and effective framework for integrating spatial controls into autoregressive image generation models. First, we explore control encoding for AR models and propose a lightweight control encoder that transforms spatial inputs (e.g., canny edges or depth maps) into control tokens. ControlAR then employs conditional decoding to generate the next image token conditioned on a per-token fusion of control and image tokens, akin to positional encodings. Compared to prefilling tokens, conditional decoding not only significantly strengthens the control capability of AR models but also maintains their efficiency. Furthermore, the proposed ControlAR surprisingly empowers AR models with arbitrary-resolution image generation via conditional decoding and specific controls. Extensive experiments demonstrate the controllability of the proposed ControlAR for autoregressive control-to-image generation across diverse inputs, including edges, depth maps, and segmentation masks. Moreover, both quantitative and qualitative results indicate that ControlAR surpasses previous state-of-the-art controllable diffusion models, e.g., ControlNet++. Code, models, and demo will soon be available at https://github.com/hustvl/ControlAR.
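The contrast between prefilling and per-token conditional decoding can be sketched as follows. This is a minimal, hypothetical illustration (plain lists stand in for token embeddings; the function names and the element-wise addition are illustrative assumptions, not the paper's actual implementation): prefilling prepends the control tokens to the sequence, lengthening it, while per-token fusion merges each control token into the image token at the same position, keeping the sequence length fixed.

```python
# Hypothetical sketch contrasting ControlAR-style conditional decoding
# with control-token prefilling. Embeddings are plain lists of numbers;
# names, shapes, and the additive fusion are illustrative assumptions.

def per_token_fusion(image_emb, control_emb):
    """Fuse each image token embedding with the control token at the same
    spatial position via element-wise addition, analogous to adding a
    positional encoding. Sequence length is unchanged."""
    return [[i + c for i, c in zip(img, ctl)]
            for img, ctl in zip(image_emb, control_emb)]

def prefill_sequence(control_emb, image_emb):
    """Baseline: prepend control tokens to the image token sequence,
    doubling its length and hence the attention cost during decoding."""
    return control_emb + image_emb

# toy example: 2 image tokens and 2 control tokens, embedding dim 2
image_emb = [[1, 2], [3, 4]]
control_emb = [[10, 0], [0, 10]]

fused = per_token_fusion(image_emb, control_emb)
prefilled = prefill_sequence(control_emb, image_emb)
print(fused)                       # [[11, 2], [3, 14]]
print(len(fused), len(prefilled))  # 2 4
```

The fused sequence stays at 2 tokens while the prefilled one grows to 4, which is why conditional decoding preserves the AR model's decoding efficiency relative to prefilling.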