Recent image generation schemes typically capture the image distribution in a latent space pre-constructed by a frozen image tokenizer. Although the tokenizer plays an essential role in successful generation, its current evaluation metrics (e.g., rFID) fail to precisely assess the tokenizer or correlate its performance with generation quality (e.g., gFID). In this paper, we comprehensively analyze the reasons for the discrepancy between reconstruction and generation quality in a discrete latent space, and, based on this analysis, propose a novel plug-and-play tokenizer training scheme to facilitate latent space construction. Specifically, we propose a latent perturbation approach to simulate sampling noise, i.e., the unexpected tokens sampled during the generative process. Building on this latent perturbation, we further propose (1) a novel tokenizer evaluation metric, pFID, which successfully correlates tokenizer performance with generation quality, and (2) a plug-and-play tokenizer training scheme, which significantly enhances the robustness of the tokenizer, thus boosting both generation quality and convergence speed. Extensive benchmarks are conducted with 11 advanced discrete image tokenizers and 2 autoregressive generation models to validate our approach. The tokenizer trained with our proposed latent perturbation achieves a notable 1.60 gFID with classifier-free guidance (CFG) and 3.45 gFID without CFG, using a $\sim$400M generator. Code: https://github.com/lxa9867/ImageFolder.
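As a minimal illustrative sketch (not the paper's exact implementation), latent perturbation can be understood as randomly replacing a fraction of the discrete token indices with random codebook entries before decoding, so the tokenizer's decoder learns to tolerate the "unexpected tokens" a generator may sample; the function name and `ratio` parameter below are hypothetical:

```python
import torch

def perturb_tokens(token_ids: torch.Tensor, codebook_size: int,
                   ratio: float = 0.1) -> torch.Tensor:
    """Replace a random fraction of token indices with random codebook entries.

    Sketch of latent perturbation: simulates sampling noise from the
    generative process. `ratio` controls the fraction of tokens perturbed.
    """
    # Boolean mask selecting roughly `ratio` of the positions to perturb
    mask = torch.rand(token_ids.shape) < ratio
    # Random replacement indices drawn uniformly from the codebook
    random_ids = torch.randint(0, codebook_size, token_ids.shape)
    return torch.where(mask, random_ids, token_ids)
```

Training the tokenizer's decoder on such perturbed token maps (with a reconstruction loss against the clean image) is one plausible way to realize the robustness objective; the same perturbation can be applied at evaluation time to compute a pFID-style metric on decoded perturbed latents.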