We introduce GrounDiT, a novel training-free spatial grounding technique for text-to-image generation with Diffusion Transformers (DiT). Spatial grounding with bounding boxes has gained attention for its simplicity and versatility, offering enhanced user control over image generation. However, prior training-free approaches often rely on updating the noisy image during the reverse diffusion process via backpropagation from custom loss functions, an approach that frequently fails to provide precise control over individual bounding boxes. In this work, we leverage the flexibility of the Transformer architecture and demonstrate that DiT can generate noisy patches corresponding to each bounding box, each fully encoding its target object and allowing fine-grained control over its region. Our approach builds on an intriguing property of DiT that we refer to as semantic sharing: when a smaller patch is jointly denoised alongside a generatable-size image, the two become semantic clones. Each patch is denoised in its own branch of the generation process and then transplanted into the corresponding region of the original noisy image at each timestep, yielding robust spatial grounding for each bounding box. In experiments on the HRS and DrawBench benchmarks, our method achieves state-of-the-art performance among training-free approaches.
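To make the per-timestep procedure concrete, below is a minimal PyTorch sketch of the "denoise, then transplant" loop described above. All names (`transplant_step`, `denoise_step`, the box format) are hypothetical illustrations rather than the authors' actual API, and the joint semantic-sharing denoising of patch and image is abstracted into a single callable; treat this as a sketch of the idea, not the paper's implementation.

```python
import torch
import torch.nn.functional as F


def transplant_step(z_t, patches, boxes, denoise_step, t):
    """One reverse-diffusion step with per-box patch transplantation.

    z_t:     full noisy image latent, shape (1, C, H, W)
    patches: list of smaller noisy latents, one per bounding box
    boxes:   list of (y0, y1, x0, x1) regions in latent coordinates
    `denoise_step(latent, t)` stands in for one DiT denoising step;
    the semantic-sharing joint denoising is abstracted into this call.
    """
    z_t = denoise_step(z_t, t)  # global branch: denoise the full image
    new_patches = []
    for patch, (y0, y1, x0, x1) in zip(patches, boxes):
        patch = denoise_step(patch, t)  # per-object branch
        # Transplant: resize the denoised patch to its bounding box and
        # overwrite that region of the full latent, grounding the object.
        region = F.interpolate(patch, size=(y1 - y0, x1 - x0),
                               mode="bilinear", align_corners=False)
        z_t[..., y0:y1, x0:x1] = region
        new_patches.append(patch)
    return z_t, new_patches


if __name__ == "__main__":
    # Toy demo with a dummy denoiser that merely shrinks the latent.
    dummy = lambda z, t: 0.98 * z
    z = torch.randn(1, 4, 64, 64)
    patches = [torch.randn(1, 4, 16, 16)]
    boxes = [(8, 24, 8, 24)]
    z, patches = transplant_step(z, patches, boxes, dummy, t=999)
    print(z.shape, patches[0].shape)
```

In the actual method the patch and the full image are denoised jointly so that semantic sharing makes them semantic clones; the sketch only shows the branching and transplantation structure of the loop.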