We present Liquid, an auto-regressive generation paradigm that seamlessly integrates visual comprehension and generation by tokenizing images into discrete codes and learning these code embeddings alongside text tokens within a shared feature space for both vision and language. Unlike previous multimodal large language models (MLLMs), Liquid achieves this integration with a single large language model (LLM), eliminating the need for external pretrained visual embeddings such as CLIP. For the first time, Liquid uncovers a scaling law: the performance drop unavoidably incurred by unified training of visual and language tasks diminishes as model size increases. Furthermore, the unified token space enables visual generation and comprehension tasks to mutually enhance each other, effectively removing the interference typical of earlier models. We show that existing LLMs can serve as strong foundations for Liquid, reducing training costs by 100x while outperforming Chameleon in multimodal capabilities and maintaining language performance comparable to mainstream LLMs such as LLAMA2. Liquid also outperforms models like SD v2.1 and SD-XL, achieving an FID of 5.47 on MJHQ-30K, and excels in both vision-language and text-only tasks. This work demonstrates that LLMs such as LLAMA3.2 and GEMMA2 are powerful multimodal generators, offering a scalable solution for enhancing both vision-language understanding and generation. The code and models will be released at https://github.com/FoundationVision/Liquid.
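The shared token space can be pictured as a single vocabulary in which discrete image codes are appended after the text token ids, so one autoregressive model predicts both modalities with the same next-token objective. The sketch below is a minimal illustration of this idea only, not Liquid's actual implementation; the vocabulary and codebook sizes are assumptions for the example.

```python
# Hedged sketch of a unified text/image token space (illustrative; the sizes
# below are assumptions, not Liquid's actual configuration).

TEXT_VOCAB_SIZE = 32000      # assumed BPE text vocabulary size
IMAGE_CODEBOOK_SIZE = 8192   # assumed VQ image-tokenizer codebook size


def image_code_to_token_id(code: int) -> int:
    """Map a discrete image code in [0, IMAGE_CODEBOOK_SIZE) into the unified
    vocabulary by offsetting it past the text token ids."""
    assert 0 <= code < IMAGE_CODEBOOK_SIZE
    return TEXT_VOCAB_SIZE + code


def token_id_to_modality(token_id: int) -> str:
    """Classify a unified token id as belonging to text or image."""
    if 0 <= token_id < TEXT_VOCAB_SIZE:
        return "text"
    if token_id < TEXT_VOCAB_SIZE + IMAGE_CODEBOOK_SIZE:
        return "image"
    raise ValueError("token id outside the unified vocabulary")


# A mixed sequence: a text prompt followed by image tokens. A single LLM is
# trained with one next-token objective over this unified vocabulary.
sequence = [17, 512, 3] + [image_code_to_token_id(c) for c in (0, 4095, 8191)]
```

Because image codes live in the same id space as text tokens, the model's embedding table and output head simply grow by the codebook size; no separate visual encoder is needed at inference.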