Most existing image tokenizers encode images into a fixed number of tokens or patches, overlooking the inherent variability in image complexity. To address this, we introduce the Content-Adaptive Tokenizer (CAT), which dynamically adjusts representation capacity based on image content and encodes simpler images into fewer tokens. We design a caption-based evaluation system that leverages large language models (LLMs) to predict content complexity and determine the optimal compression ratio for a given image, taking into account factors critical to human perception. Trained on images with diverse compression ratios, CAT demonstrates robust performance in image reconstruction. We also utilize its variable-length latent representations to train Diffusion Transformers (DiTs) for ImageNet generation. By optimizing token allocation, CAT improves the FID score over fixed-ratio baselines trained with the same FLOPs and boosts inference throughput by 18.5%.
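To make the adaptive-allocation idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual code): it assumes an LLM-derived complexity score in [0, 1] and a square input image, and maps the score to one of a few illustrative spatial compression ratios, from which the latent token count follows.

```python
# Hypothetical sketch of content-adaptive token allocation.
# Assumptions (not from the paper): a complexity score in [0, 1] produced by
# an LLM-based evaluator, and the candidate compression ratios {8, 16, 32}.

def choose_compression_ratio(complexity: float) -> int:
    """Map a complexity score to a spatial compression ratio (illustrative thresholds)."""
    if complexity < 0.33:
        return 32   # simple image: compress aggressively, fewer tokens
    if complexity < 0.66:
        return 16   # moderate complexity
    return 8        # complex image: keep more spatial detail

def num_tokens(image_size: int, ratio: int) -> int:
    """Number of latent tokens for a square image at a given compression ratio."""
    side = image_size // ratio
    return side * side

# A simple image gets a coarse ratio and far fewer tokens than a complex one.
ratio_simple = choose_compression_ratio(0.2)    # -> 32
ratio_complex = choose_compression_ratio(0.9)   # -> 8
print(num_tokens(256, ratio_simple))   # 64 tokens
print(num_tokens(256, ratio_complex))  # 1024 tokens
```

A fixed-ratio tokenizer would spend the full 1024-token budget on every image; the adaptive scheme spends it only where the complexity score warrants, which is the source of the throughput gain described above.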