Unified multimodal generation architectures that jointly produce text and images have recently emerged as a promising direction for text-to-image (T2I) synthesis. However, many existing systems rely on explicit modality switching, generating reasoning text and then manually switching to image generation. This separate, sequential inference process limits cross-modal coupling and prevents fully automatic multimodal generation. This work explores post-training to achieve fully unified text-image generation, where models autonomously transition from textual reasoning to visual synthesis within a single inference process. We examine the impact of joint text-image generation on T2I performance and the relative importance of each modality during post-training. We additionally explore different post-training data strategies, showing that a targeted dataset addressing specific model limitations outperforms broad image-caption corpora or benchmark-aligned data. Using offline, reward-weighted post-training with fully self-generated synthetic data, our approach improves multimodal image generation across four diverse T2I benchmarks, demonstrating the effectiveness of reward-weighting both modalities and strategically designed post-training data.
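To make the "reward-weighting both modalities" idea concrete, the following is a minimal PyTorch sketch of an offline, reward-weighted objective over a unified text+image token sequence. All names and the modality weights (`reward`, `text_mask`, `image_mask`, `w_text`, `w_image`) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def reward_weighted_loss(logits, targets, reward, text_mask, image_mask,
                         w_text=1.0, w_image=1.0):
    """Cross-entropy over a unified text+image token sequence, scaled by a
    per-sample offline reward and per-modality weights (hypothetical sketch).

    logits:     (B, T, V) model outputs over the unified vocabulary
    targets:    (B, T)    next-token targets (text and image tokens interleaved)
    reward:     (B,)      offline reward for each self-generated sample
    text_mask:  (B, T)    1 where the target token is a text token, else 0
    image_mask: (B, T)    1 where the target token is an image token, else 0
    """
    # Per-token negative log-likelihood, kept unreduced so it can be weighted.
    nll = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        targets.reshape(-1),
        reduction="none",
    ).view_as(targets)

    # Weight the two modalities separately, then scale each sample's mean
    # token loss by its offline reward before averaging over the batch.
    per_token = w_text * text_mask * nll + w_image * image_mask * nll
    num_tokens = (text_mask + image_mask).sum(dim=1).clamp(min=1)
    per_sample = per_token.sum(dim=1) / num_tokens
    return (reward * per_sample).mean()
```

In this reading, setting either `w_text` or `w_image` to zero recovers a single-modality ablation, which is one way the relative importance of each modality during post-training could be probed.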