We introduce STAR, a text-to-image model that employs a scale-wise auto-regressive paradigm. Unlike VAR, which is limited to class-conditional synthesis at resolutions up to 256$\times$256, STAR enables text-driven image generation up to 1024$\times$1024 through three key designs. First, we introduce a pre-trained text encoder to extract representations of the textual conditions, which enhances detail and generalizability. Second, given the inherent structural correlation across scales, we adopt 2D Rotary Positional Encoding (RoPE) and adapt it into a normalized version, ensuring a consistent interpretation of relative positions across token maps of different resolutions and stabilizing training. Third, we observe that sampling all tokens within a scale simultaneously can disrupt inter-token relationships and destabilize structure, particularly in high-resolution generation. To address this, we propose a novel stable sampling method that incorporates causal dependencies into the sampling process, yielding both rich details and stable structures. Compared with previous diffusion and auto-regressive models, STAR achieves superior fidelity, text-image consistency, and aesthetic quality on existing benchmarks, while requiring only 2.21s to generate a 1024$\times$1024 image on an A100. These results highlight the potential of auto-regressive methods for high-quality image synthesis and suggest new directions for text-to-image generation.
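To make the normalized-RoPE idea concrete, below is a minimal sketch of one plausible reading of it: 2D rotary embeddings whose row/column coordinates are divided by the current token-map size, so that relative positions span the same $[0,1)$ range at every scale. The function name `normalized_2d_rope`, the interleaved pairing of channels, and the base frequency are our assumptions for illustration, not the authors' implementation.

```python
import torch

def normalized_2d_rope(x, h, w, base=10000.0):
    """Apply a normalized 2D rotary embedding to x of shape (..., h*w, dim).

    The first half of the channel pairs rotates with the row coordinate,
    the second half with the column coordinate. Coordinates are divided by
    the map size, so token maps of any resolution cover the same [0, 1)
    range -- a hypothetical reading of the paper's "normalized" RoPE.
    """
    dim = x.shape[-1]
    assert dim % 4 == 0, "dim must be divisible by 4 (2 axes x sin/cos pairs)"
    d = dim // 2  # rotated channel pairs per axis

    # Normalized grid coordinates in [0, 1): identical range at every scale.
    ys = torch.arange(h, device=x.device, dtype=x.dtype) / h
    xs = torch.arange(w, device=x.device, dtype=x.dtype) / w
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")           # each (h, w)
    pos = torch.stack([gy.flatten(), gx.flatten()], dim=-1)  # (h*w, 2)

    # Standard RoPE frequency spectrum per axis.
    freqs = 1.0 / base ** (torch.arange(0, d, 2, device=x.device, dtype=x.dtype) / d)
    angles = torch.cat([pos[:, :1] * freqs, pos[:, 1:] * freqs], dim=-1)  # (h*w, d)
    cos, sin = angles.cos(), angles.sin()

    # Rotate interleaved channel pairs (x_{2i}, x_{2i+1}).
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out
```

Under this normalization, the token at the center of a 16$\times$16 map receives the same rotary phase as the token at the center of a 32$\times$32 map, which is one way to realize the consistent cross-scale interpretation of relative positions that the abstract describes.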