Diffusion models have recently revolutionized the field of text-to-image generation. Their unique way of fusing text and image information underlies their remarkable ability to generate images that closely follow the text. From another perspective, these generative models encode clues about the precise correlation between words and pixels. In this work, we propose a simple but effective method that exploits the attention mechanism in the denoising network of text-to-image diffusion models. Without re-training or inference-time optimization, the semantic grounding of phrases can be obtained directly. We evaluate our method on Pascal VOC 2012 and Microsoft COCO 2014 under the weakly-supervised semantic segmentation setting, where it outperforms prior methods. In addition, we find that the acquired word-pixel correlation generalizes, with only a few modifications, to the learned text embeddings of customized generation methods. To validate this discovery, we introduce a new practical task, "personalized referring image segmentation," together with a new dataset. Experiments in various scenarios demonstrate the advantages of our method over strong baselines on this task. In summary, our work reveals a novel way to extract the rich multi-modal knowledge hidden in diffusion models for segmentation.
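The core mechanism, reading phrase-to-pixel grounding off the cross-attention layers of the denoising U-Net, can be illustrated with a short sketch. The following is a minimal, hypothetical example assuming the Hugging Face diffusers library and Stable Diffusion v1.5; the layer selection, timestep aggregation, and thresholding below are illustrative assumptions, not the paper's exact recipe.

```python
# Sketch: record cross-attention probabilities from a Stable Diffusion U-Net
# and turn the attention of one word into a rough segmentation mask.
# Assumptions (not from the paper): diffusers + SD v1.5, 16x16 layers only,
# uniform averaging over heads/layers/timesteps, mean-thresholding.
import torch
from diffusers import StableDiffusionPipeline

attn_store = {}  # layer name -> list of (batch*heads, pixels, tokens) tensors

class StoreCrossAttnProcessor:
    """Drop-in attention processor that also records cross-attention maps."""
    def __init__(self, name):
        self.name = name

    def __call__(self, attn, hidden_states, encoder_hidden_states=None,
                 attention_mask=None, **kwargs):
        is_cross = encoder_hidden_states is not None
        context = encoder_hidden_states if is_cross else hidden_states
        query = attn.head_to_batch_dim(attn.to_q(hidden_states))
        key = attn.head_to_batch_dim(attn.to_k(context))
        value = attn.head_to_batch_dim(attn.to_v(context))
        # Post-softmax attention over text tokens for every spatial location.
        probs = attn.get_attention_scores(query, key, attention_mask)
        if is_cross and probs.shape[1] == 16 * 16:  # keep one resolution
            # Keep the text-conditional half of the classifier-free-guidance
            # batch (diffusers puts the unconditional pass first).
            attn_store.setdefault(self.name, []).append(
                probs[probs.shape[0] // 2:].detach().cpu())
        out = torch.bmm(probs, value)
        out = attn.batch_to_head_dim(out)
        out = attn.to_out[0](out)   # linear projection
        out = attn.to_out[1](out)   # dropout
        return out

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Replace every processor; only cross-attention ("attn2") layers get stored.
pipe.unet.set_attn_processor(
    {name: StoreCrossAttnProcessor(name) for name in pipe.unet.attn_processors}
)

prompt = "a photo of a dog on the beach"
_ = pipe(prompt, num_inference_steps=25)

# Locate the word "dog" in the prompt (assumes it maps to a single token;
# the tokenizer wraps words in BOS/EOS, hence the [1]).
token_idx = pipe.tokenizer(prompt).input_ids.index(
    pipe.tokenizer("dog").input_ids[1])

# Average the word's attention column over heads, layers, and timesteps,
# then binarize into a crude 16x16 mask.
maps = [m.float().mean(0)[:, token_idx]
        for ms in attn_store.values() for m in ms]
heat = torch.stack(maps).mean(0).reshape(16, 16)
mask = (heat > heat.mean()).float()
```

In practice one would upsample the heat map to image resolution and apply a more careful normalization and threshold; the sketch only shows where the word-pixel correlation lives and how cheaply it can be read out, with no re-training or test-time optimization.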