Text-to-image (T2I) diffusion models have revolutionized generative modeling by producing high-fidelity, diverse, and visually realistic images from textual prompts. Despite these advances, existing models struggle with complex prompts involving multiple objects and attributes, often misaligning modifiers with their corresponding nouns or neglecting certain elements. Recent attention-based methods have improved object inclusion and linguistic binding, but still face challenges such as attribute misbinding and a lack of robust generalization guarantees. Leveraging the PAC-Bayes framework, we propose a Bayesian approach that designs custom priors over attention distributions to enforce desirable properties, including divergence between objects, alignment between modifiers and their corresponding nouns, minimal attention to irrelevant tokens, and regularization for better generalization. Our approach treats the attention mechanism as an interpretable component, enabling fine-grained control and improved attribute-object alignment. We demonstrate the effectiveness of our method on standard benchmarks, achieving state-of-the-art results across multiple metrics. By integrating custom priors into the denoising process, our method enhances image quality and addresses long-standing challenges in T2I diffusion models, paving the way for more reliable and interpretable generative models.
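The four attention properties named above can be made concrete as a composite loss over per-token spatial attention maps. The sketch below is an illustrative implementation under assumed interfaces, not the paper's actual formulation: `attention_prior_loss`, its token-to-map dictionary input, and the choice of symmetric KL for the divergence and alignment terms are all hypothetical stand-ins for whatever the method actually optimizes during denoising.

```python
import numpy as np

def kl(p, q, eps=1e-8):
    # KL divergence between two attention maps, flattened and renormalized.
    # eps keeps the log finite for maps with zero-mass locations.
    p = p.ravel() + eps
    q = q.ravel() + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def attention_prior_loss(attn, objects, modifier_pairs, irrelevant, lam=0.1):
    """Composite loss encoding the four desiderata from the abstract.

    attn: dict mapping token -> (H, W) non-negative attention map (assumed shape)
    objects: object tokens that should attend to distinct regions
    modifier_pairs: (modifier, noun) token pairs whose maps should align
    irrelevant: tokens whose total attention mass should stay small
    lam: weight of the KL-to-uniform regularizer (a stand-in for the
         PAC-Bayes-style prior term; the true prior may differ)
    """
    loss = 0.0
    # (1) Divergence between objects: reward separated maps via
    #     negative symmetric KL between every object pair.
    for i in range(len(objects)):
        for j in range(i + 1, len(objects)):
            a, b = attn[objects[i]], attn[objects[j]]
            loss -= 0.5 * (kl(a, b) + kl(b, a))
    # (2) Modifier-noun alignment: pull each pair's maps together.
    for m, n in modifier_pairs:
        loss += 0.5 * (kl(attn[m], attn[n]) + kl(attn[n], attn[m]))
    # (3) Minimal attention to irrelevant tokens: penalize their mass.
    for t in irrelevant:
        loss += float(attn[t].sum())
    # (4) Regularization: KL of every map toward a uniform spatial prior.
    uniform = np.ones_like(next(iter(attn.values())))
    for t in attn:
        loss += lam * kl(attn[t], uniform)
    return loss
```

In use, such a loss would be evaluated on the model's cross-attention maps at each denoising step and its gradient used to nudge the latents, so that a prompt like "a red cat and a dog" binds "red" to the cat's region while keeping the cat and dog maps disjoint.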