Existing text-to-image diffusion models primarily generate images from text prompts. However, the inherent conciseness of textual descriptions poses challenges in faithfully synthesizing images with intricate details, such as specific entities or scenes. This paper presents UNIMO-G, a simple multimodal conditional diffusion framework that operates on multimodal prompts with interleaved textual and visual inputs, demonstrating a unified capability for both text-driven and subject-driven image generation. UNIMO-G comprises two core components: a Multimodal Large Language Model (MLLM) for encoding multimodal prompts, and a conditional denoising diffusion network for generating images based on the encoded multimodal input. We leverage a two-stage training strategy to train the framework effectively: first, pre-training on large-scale text-image pairs to develop conditional image generation capabilities, and then instruction tuning with multimodal prompts to achieve unified image generation proficiency. A well-designed data processing pipeline involving language grounding and image segmentation is employed to construct multimodal prompts. UNIMO-G excels in both text-to-image generation and zero-shot subject-driven synthesis, and is notably effective in generating high-fidelity images from complex multimodal prompts involving multiple image entities.
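To make the conditioning flow concrete, the sketch below illustrates, at a toy scale, how an MLLM-encoded sequence of interleaved text tokens and image features might condition a denoising network via cross-attention. It is a minimal sketch under stated assumptions: all module names, dimensions, and the stand-in encoder and denoiser are hypothetical simplifications, not the UNIMO-G implementation.

```python
# Minimal conceptual sketch (not the authors' code) of conditioning a denoiser
# on an MLLM-encoded multimodal prompt. All names and sizes are hypothetical.
import torch
import torch.nn as nn


class ToyMLLMEncoder(nn.Module):
    """Encodes an interleaved prompt of text tokens and image patch features
    into a single sequence of conditioning vectors."""

    def __init__(self, dim: int = 256, vocab: int = 1000):
        super().__init__()
        self.text_embed = nn.Embedding(vocab, dim)
        self.image_proj = nn.Linear(768, dim)  # map vision features to the LLM width
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, text_ids, image_feats):
        # Interleaving is simplified here to a [text; image] concatenation.
        seq = torch.cat([self.text_embed(text_ids), self.image_proj(image_feats)], dim=1)
        return self.backbone(seq)  # (B, L, dim) conditioning sequence


class ToyConditionalDenoiser(nn.Module):
    """A denoiser that attends to the multimodal conditioning sequence."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.latent_proj = nn.Linear(dim, dim)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.out = nn.Linear(dim, dim)

    def forward(self, noisy_latents, cond_seq):
        q = self.latent_proj(noisy_latents)
        # Cross-attention injects the encoded multimodal prompt into the latents.
        attended, _ = self.cross_attn(q, cond_seq, cond_seq)
        return self.out(attended)  # predicted noise / denoised latents


if __name__ == "__main__":
    enc, den = ToyMLLMEncoder(), ToyConditionalDenoiser()
    text_ids = torch.randint(0, 1000, (1, 12))    # token ids of the textual prompt
    image_feats = torch.randn(1, 16, 768)         # patch features of a referenced entity
    noisy_latents = torch.randn(1, 64, 256)       # flattened noisy image latents
    cond = enc(text_ids, image_feats)
    print(den(noisy_latents, cond).shape)         # torch.Size([1, 64, 256])
```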
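As a rough illustration of how a data processing pipeline of this kind could assemble multimodal prompts, the sketch below grounds caption phrases to image regions, segments the corresponding entities, and pairs the resulting crops with placeholder tokens in the caption. The helpers `ground_phrases` and `segment_region` are hypothetical stubs standing in for real grounding and segmentation models; the released pipeline may differ.

```python
# Hedged illustration (not the released pipeline) of constructing a multimodal
# prompt from an image-caption pair via grounding and segmentation stubs.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class MultimodalPrompt:
    text: str                 # caption with <img> placeholders
    entity_images: List[str]  # entity crops aligned with the placeholders, in order


def ground_phrases(caption: str) -> List[Tuple[str, Tuple[int, int, int, int]]]:
    """Stub: a real pipeline would run a grounding model mapping noun phrases to boxes."""
    return [("a corgi", (10, 20, 200, 180))]


def segment_region(image_path: str, box: Tuple[int, int, int, int]) -> str:
    """Stub: a real pipeline would run a segmentation model and save the entity crop."""
    return f"{image_path}.crop_{box[0]}_{box[1]}.png"


def build_prompt(image_path: str, caption: str) -> MultimodalPrompt:
    text = caption
    crops = []
    for phrase, box in ground_phrases(caption):
        crops.append(segment_region(image_path, box))
        # Attach a placeholder the MLLM can interleave with the entity's pixels.
        text = text.replace(phrase, f"{phrase} <img>", 1)
    return MultimodalPrompt(text=text, entity_images=crops)


if __name__ == "__main__":
    p = build_prompt("photo.jpg", "a corgi running on the beach")
    print(p.text)            # "a corgi <img> running on the beach"
    print(p.entity_images)   # ["photo.jpg.crop_10_20.png"]
```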