Unified generative models have shown remarkable performance in text and image generation. For image synthesis tasks, they adopt straightforward text-to-image (T2I) generation. However, direct T2I generation limits the models' ability to handle complex compositional instructions, which frequently occur in real-world scenarios. Although this issue is vital, existing works mainly focus on improving the models' basic image generation capability. While such improvements help to some extent, they still fail to adequately resolve the problem. Inspired by Chain of Thought (CoT) reasoning, which solves complex problems step by step, this work introduces CoT into unified generative models to address the challenges of complex image generation that direct T2I generation cannot effectively solve, thereby endowing models with enhanced image generation ability. To this end, we first propose Functionality-oriented eXperts (FoXperts), an expert-parallel architecture in our model FoX that assigns experts by function. FoXperts disentangles the potential conflicts in mainstream modality-oriented designs and provides a solid foundation for CoT. When introducing CoT, the first question is how to design it for complex image generation. We emulate a human-like artistic workflow -- planning, acting, reflection, and correction -- and propose the Multimodal Chain of Thought (MCoT) approach, since the data involve both text and images. To address the subsequent challenge -- designing an effective MCoT training paradigm -- we develop a multi-task joint training scheme that equips the model, in a disentangled manner, with all the capabilities required for each MCoT step. This paradigm avoids the difficulty of collecting consistent multi-step data tuples. Extensive experiments show that FoX consistently outperforms existing unified models on various T2I benchmarks, delivering notable improvements in complex image generation.
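The four-step MCoT workflow described above can be sketched as a control loop. This is a minimal illustrative sketch, not the paper's actual method: the functions `plan`, `generate_image`, `reflect`, and `correct` are hypothetical stubs standing in for the unified model's text and image capabilities.

```python
# Illustrative sketch of an MCoT-style loop: planning, acting,
# reflection, and correction. All model calls are hypothetical stubs.

def plan(prompt):
    # Planning: decompose a complex compositional prompt into sub-goals (stubbed).
    return [f"render: {part.strip()}" for part in prompt.split(" and ")]

def generate_image(steps):
    # Acting: stand-in for the unified model's image-generation step.
    return {"content": steps, "flaws": ["missing object"]}

def reflect(image, steps):
    # Reflection: text-side critique of the generated image against the plan.
    return list(image["flaws"])

def correct(image, flaws):
    # Correction: targeted edit/regeneration guided by the critique (stubbed).
    image["flaws"] = []
    return image

def mcot_generate(prompt, max_rounds=3):
    steps = plan(prompt)                 # planning
    image = generate_image(steps)        # acting
    for _ in range(max_rounds):
        flaws = reflect(image, steps)    # reflection
        if not flaws:
            break                        # critique found nothing to fix
        image = correct(image, flaws)    # correction
    return image

result = mcot_generate("a red cube and a blue sphere")
print(result["flaws"])  # [] after the correction round
```

The disentangled multi-task training scheme in the abstract corresponds to training each of these four capabilities separately, so no single dataset of consistent plan/image/critique/correction tuples is required.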