Text-to-image diffusion models produce impressive results but are frustrating tools for artists who desire fine-grained control. For example, a common use case is to create images of a specific instance in novel contexts, i.e., "identity-preserving generation". This setting, along with many other tasks (e.g., relighting), is a natural fit for image+text-conditional generative models. However, there is insufficient high-quality paired data to train such a model directly. We propose Diffusion Self-Distillation, a method for using a pre-trained text-to-image model to generate its own dataset for text-conditioned image-to-image tasks. We first leverage a text-to-image diffusion model's in-context generation ability to create grids of images and curate a large paired dataset with the help of a Vision-Language Model. We then fine-tune the text-to-image model into a text+image-to-image model using the curated paired dataset. We demonstrate that Diffusion Self-Distillation outperforms existing zero-shot methods and is competitive with per-instance tuning techniques on a wide range of identity-preserving generation tasks, without requiring test-time optimization.
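To make the two-stage pipeline concrete, here is a minimal sketch of the data-curation loop: generate in-context grids with the frozen text-to-image model, then keep only pairs a Vision-Language Model judges to show the same instance. The callables `t2i_generate_grid` and `vlm_is_consistent` are hypothetical stand-ins, not the paper's actual interfaces; this is an illustration of the idea, not the implementation.

```python
# Hypothetical sketch of the Diffusion Self-Distillation data-curation loop.
# `t2i_generate_grid` and `vlm_is_consistent` are assumed placeholders for
# in-context grid generation and a VLM identity check, respectively.

from dataclasses import dataclass
from typing import Callable, List, Tuple

Image = bytes  # stand-in type for a decoded image


@dataclass
class PairedExample:
    condition_image: Image   # reference view of the subject
    target_image: Image      # same subject in a new context
    target_prompt: str       # text describing the new context


def build_paired_dataset(
    prompts: List[Tuple[str, str]],                     # (subject prompt, context prompt)
    t2i_generate_grid: Callable[[str], List[Image]],    # grid of images of one subject
    vlm_is_consistent: Callable[[Image, Image], bool],  # VLM identity-consistency check
) -> List[PairedExample]:
    """Create paired (reference, target) examples for fine-tuning the
    text-to-image model into a text+image-to-image model."""
    dataset: List[PairedExample] = []
    for subject_prompt, context_prompt in prompts:
        # One grid = several candidate views of the same subject, generated in context.
        grid = t2i_generate_grid(f"{subject_prompt}; shown again {context_prompt}")
        if len(grid) < 2:
            continue
        reference, candidates = grid[0], grid[1:]
        for candidate in candidates:
            # Curation step: keep only pairs the VLM believes preserve identity.
            if vlm_is_consistent(reference, candidate):
                dataset.append(PairedExample(reference, candidate, context_prompt))
    return dataset
```

The resulting pairs would then serve as supervision for the fine-tuning stage, where the curated target image is predicted from the reference image plus the context prompt.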