Large text-to-image models have revolutionized the ability to generate imagery from natural language. However, particularly unique or personal visual concepts, such as pets and furniture, are not captured by the original model. This has led to growing interest in personalizing text-to-image models. Despite significant progress, this task remains a formidable challenge, particularly in preserving the subject's identity. Most researchers attempt to address this issue by modifying the model architecture. Such methods can retain the subject's structure and color but fail to preserve identity details. To address this issue, our approach takes a data-centric perspective. We introduce a novel regularization dataset generation strategy that operates on both the text and image levels. This strategy enables the model to preserve fine details of the desired subjects, such as text and logos. Our method is architecture-agnostic and can be flexibly applied to various text-to-image models. We show on established benchmarks that our data-centric approach sets a new state of the art in terms of identity preservation and text alignment.