Text-to-image diffusion models excel at generating photorealistic images, but commonly fail to render the spatial relationships described in text prompts. We identify two core issues underlying this common failure: 1) the ambiguous nature of spatially-related data in existing datasets, and 2) the inability of current text encoders to accurately interpret the spatial semantics of input descriptions. We address these issues with CoMPaSS, a versatile training framework that enhances the spatial understanding of any T2I diffusion model. CoMPaSS resolves the ambiguity of spatially-related data with the Spatial Constraints-Oriented Pairing (SCOP) data engine, which curates spatially-accurate training data through a set of principled spatial constraints. To better exploit these curated high-quality spatial priors, CoMPaSS further introduces a Token ENcoding ORdering (TENOR) module, effectively compensating for the shortcomings of text encoders. Extensive experiments on four popular open-weight T2I diffusion models, covering both UNet- and MMDiT-based architectures, demonstrate the effectiveness of CoMPaSS, which sets new state-of-the-art results with substantial relative gains across well-known spatial relationship generation benchmarks, including VISOR (+98%), T2I-CompBench Spatial (+67%), and GenEval Position (+131%). Code will be available at https://github.com/blurgyy/CoMPaSS.