Recent advances in Unified Multimodal Models (UMMs) have substantially improved text-to-image (T2I) generation, particularly through the integration of Chain-of-Thought (CoT) reasoning. However, existing CoT-based T2I methods rely largely on abstract natural-language planning, which lacks the precision required for complex spatial layouts, structured visual elements, and dense textual content. In this work, we propose CoCo (Code-as-CoT), a code-driven reasoning framework that represents the reasoning process as executable code, enabling explicit and verifiable intermediate planning for image generation. Given a text prompt, CoCo first generates executable code that specifies the structural layout of the scene; this code is then executed in a sandboxed environment to render a deterministic draft image. The model subsequently refines this draft through fine-grained image editing to produce the final high-fidelity result. To support this training paradigm, we construct CoCo-10K, a curated dataset of structured draft-final image pairs designed to teach both structured draft construction and corrective visual refinement. Empirical evaluations on StructT2IBench, OneIG-Bench, and LongText-Bench show that CoCo achieves improvements of +68.83%, +54.8%, and +41.23% over direct generation, while also outperforming other CoT-empowered generation methods. These results demonstrate that executable code is an effective and reliable reasoning paradigm for precise, controllable, and structured text-to-image generation. The code is available at: https://github.com/micky-li-hd/CoCo
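
To make the "code as intermediate plan" idea concrete, here is a minimal, hypothetical sketch of the kind of layout code such a pipeline might emit and execute. The prompt, function name, coordinates, and the choice of SVG as the rendering target are all illustrative assumptions, not CoCo's actual code format: the point is only that spatial layout and dense text become explicit, verifiable values in code, and executing the code yields a deterministic draft.

```python
# Hypothetical code-as-CoT sketch: for a prompt like
# "a poster titled 'SALE' above two red boxes", the model would emit
# layout code, and a sandbox would execute it into a deterministic
# draft image (rendered here as an SVG string for self-containment).

def build_draft_svg(width=512, height=512):
    """Render a draft layout as SVG markup (illustrative only)."""
    parts = [f'<svg xmlns="http://www.w3.org/2000/svg" '
             f'width="{width}" height="{height}">']
    # White canvas background.
    parts.append(f'<rect width="{width}" height="{height}" fill="white"/>')
    # Dense text content: the exact string and position are explicit,
    # so they can be checked programmatically before rendering.
    parts.append('<text x="256" y="72" font-size="48" '
                 'text-anchor="middle">SALE</text>')
    # Structured layout: two red boxes with precise coordinates.
    for x in (60, 276):
        parts.append(f'<rect x="{x}" y="120" width="176" height="250" '
                     'fill="none" stroke="red" stroke-width="4"/>')
    parts.append('</svg>')
    return "\n".join(parts)

draft = build_draft_svg()
# A downstream editing model would then refine this draft into the
# final high-fidelity image.
```

Because the draft is produced by executing code rather than by sampling pixels, the intermediate plan is reproducible and its structural constraints (counts, positions, text strings) can be verified before the refinement stage.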