Subject-driven text-to-image diffusion models allow users to adapt a pre-trained model to new concepts absent from the pre-training dataset using only a few sample images. However, prevalent subject-driven models rely primarily on single-concept input images and struggle to specify the target concept when given multi-concept input images. To address this, we introduce a textually localized text-to-image model (Textual Localization) that handles multi-concept input images. During fine-tuning, our method applies a novel cross-attention guidance to decompose multiple concepts, establishing distinct connections between the visual representation of the target concept and the identifier token in the text prompt. Experimental results show that our method outperforms or matches the baseline models in image fidelity and image-text alignment on multi-concept input images. Compared to Custom Diffusion, our method with hard guidance achieves CLIP-I scores that are 7.04% and 8.13% higher and CLIP-T scores that are 2.22% and 5.85% higher in single-concept and multi-concept generation, respectively. Notably, our method produces cross-attention maps consistent with the target concept in the generated images, a capability absent in existing models.
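Since the abstract names the cross-attention guidance only at a high level, the sketch below illustrates one plausible form of the "hard" variant: pushing the attention that the learned identifier token assigns across spatial locations to match the target concept's region. This is a minimal PyTorch sketch under stated assumptions, not the paper's implementation; the function name `cross_attention_guidance_loss`, the tensor shapes, and the choice of an MSE penalty against a binary segmentation mask are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def cross_attention_guidance_loss(attn_maps: torch.Tensor,
                                  target_mask: torch.Tensor,
                                  identifier_idx: int) -> torch.Tensor:
    """Hedged sketch of a hard cross-attention localization loss.

    attn_maps:      (batch, heads, H*W, num_tokens) cross-attention
                    probabilities from one diffusion U-Net layer
                    (assumed shape, not taken from the paper).
    target_mask:    (batch, H, W) binary mask marking the target
                    concept's region in the multi-concept input image.
    identifier_idx: position of the identifier token (e.g. a learned
                    placeholder like "<new1>") in the text prompt.
    """
    b, h, hw, _ = attn_maps.shape
    side = int(hw ** 0.5)  # assumes a square latent feature map

    # Attention the identifier token receives at each spatial location.
    token_attn = attn_maps[..., identifier_idx]          # (b, h, H*W)
    token_attn = token_attn.view(b, h, side, side)

    # Resize the concept mask to the attention-map resolution.
    mask = F.interpolate(target_mask.unsqueeze(1).float(),
                         size=(side, side), mode="nearest")  # (b, 1, s, s)

    # Hard guidance (as assumed here): make the identifier token attend
    # inside the target region and nowhere else, so the token binds to
    # the target concept rather than to the other concepts in the image.
    return F.mse_loss(token_attn, mask.expand_as(token_attn))
```

In this reading, the loss would be added to the usual diffusion fine-tuning objective so that, over training, the identifier token's cross-attention map converges to the target concept's mask, which is consistent with the abstract's claim that the generated cross-attention maps align with the target concept.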