Scaling up autoregressive models in vision has not proven as beneficial as in large language models. In this work, we investigate this scaling problem in the context of text-to-image generation, focusing on two critical factors: whether models use discrete or continuous tokens, and whether tokens are generated in a random or fixed raster order, using BERT- or GPT-like transformer architectures respectively. Our empirical results show that, while all models scale effectively in terms of validation loss, their evaluation performance -- measured by FID, GenEval score, and visual quality -- follows different trends. Models based on continuous tokens achieve significantly better visual quality than those using discrete tokens. Furthermore, the generation order and attention mechanism significantly affect the GenEval score: random-order models achieve notably better GenEval scores than raster-order models. Inspired by these findings, we train Fluid, a random-order autoregressive model on continuous tokens. The Fluid 10.5B model achieves a new state-of-the-art zero-shot FID of 6.16 on MS-COCO 30K and a 0.69 overall score on the GenEval benchmark. We hope our findings and results will encourage future efforts to further bridge the scaling gap between vision and language models.
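To make the raster-order vs. random-order distinction concrete, the toy sketch below contrasts the two generation schedules over token positions only. This is a hypothetical illustration, not the paper's implementation: the function names and the equal-sized grouping of masked positions per step are assumptions, and no actual model or image tokens are involved.

```python
import random

def raster_order_schedule(num_tokens):
    # GPT-like: one token per step, in a fixed left-to-right (raster) order;
    # each prediction conditions on all previously generated positions.
    return [[i] for i in range(num_tokens)]

def random_order_schedule(num_tokens, num_steps, seed=0):
    # BERT-like: each step reveals a random subset of the still-masked
    # positions, so the full image emerges in num_steps parallel steps.
    rng = random.Random(seed)
    positions = list(range(num_tokens))
    rng.shuffle(positions)
    # Split the shuffled positions into num_steps roughly equal groups.
    groups = [positions[i::num_steps] for i in range(num_steps)]
    return [sorted(g) for g in groups if g]

raster = raster_order_schedule(16)
rand = random_order_schedule(16, num_steps=4)
assert len(raster) == 16  # 16 steps: one position per step
assert len(rand) == 4     # 4 steps, several positions at once
# Both schedules cover every position exactly once.
assert sorted(p for g in rand for p in g) == list(range(16))
```

The practical difference the sketch highlights: the raster schedule needs as many steps as tokens, while the random-order schedule predicts many masked positions per step, which is the regime where the abstract reports better GenEval scores.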