Recent advances in accelerating text-to-image (T2I) diffusion models have enabled the synthesis of high-fidelity images in as little as a single step. However, personalizing these models to incorporate novel concepts remains challenging, because one-step models have limited capacity to capture new concept distributions effectively. We propose a bidirectional concept distillation framework, EchoDistill, to enable one-step diffusion personalization (1-SDP). Our approach is an end-to-end training process in which a multi-step diffusion model (teacher) and a one-step diffusion model (student) are trained simultaneously. The concept is first distilled from the teacher model to the student, and then echoed back from the student to the teacher. During EchoDistill, we share the text encoder between the two models to ensure consistent semantic understanding. The student model is then optimized with adversarial losses to align with the real image distribution and with alignment losses to maintain consistency with the teacher's output. Furthermore, we introduce a bidirectional echoing refinement strategy, wherein the student model leverages its faster generation capability to provide feedback to the teacher model. This bidirectional concept distillation mechanism not only enhances the student model's ability to personalize novel concepts but also improves the generative quality of the teacher model. Our experiments demonstrate that this collaborative framework significantly outperforms existing personalization methods under the 1-SDP setup, establishing a novel paradigm for rapid and effective personalization in T2I diffusion models.
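To make the training procedure described above more concrete, the following is a minimal sketch of one EchoDistill-style update step, assuming PyTorch. All networks, loss weights, and tensor shapes here are illustrative stand-ins (not the paper's actual architecture or code): the student is trained with an alignment loss toward the teacher plus an adversarial loss against real images, the text encoder is shared, and the teacher is then refined against samples echoed back from the student.

```python
# Hypothetical sketch of a bidirectional distillation step (teacher -> student -> teacher).
# TinyGenerator, TinyDiscriminator, and all hyperparameters are placeholders for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGenerator(nn.Module):
    """Stand-in for a (one-step or multi-step) diffusion backbone."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, dim, 3, padding=1), nn.SiLU(),
                                 nn.Conv2d(dim, 3, 3, padding=1))
    def forward(self, noise, text_emb):
        # Text conditioning folded in additively for brevity.
        return self.net(noise) + text_emb.view(noise.size(0), 3, 1, 1)

class TinyDiscriminator(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, dim, 4, stride=2), nn.SiLU(),
                                 nn.Conv2d(dim, 1, 4, stride=2))
    def forward(self, x):
        return self.net(x).mean(dim=(1, 2, 3))

teacher, student, disc = TinyGenerator(), TinyGenerator(), TinyDiscriminator()
text_encoder = nn.Linear(16, 3)  # shared between teacher and student
opt_student = torch.optim.Adam(
    list(student.parameters()) + list(text_encoder.parameters()), lr=1e-4)
opt_teacher = torch.optim.Adam(teacher.parameters(), lr=1e-5)
opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-4)

def training_step(real_images, prompt_tokens, noise):
    text_emb = text_encoder(prompt_tokens)

    # Concept distillation (teacher -> student): alignment + adversarial losses.
    with torch.no_grad():
        teacher_out = teacher(noise, text_emb)
    student_out = student(noise, text_emb)
    align_loss = F.mse_loss(student_out, teacher_out)   # consistency with teacher output
    adv_loss = -disc(student_out).mean()                # push student toward real distribution
    student_loss = align_loss + 0.1 * adv_loss
    opt_student.zero_grad(); student_loss.backward(); opt_student.step()

    # Discriminator update on real vs. student samples.
    d_loss = (F.softplus(-disc(real_images)).mean()
              + F.softplus(disc(student_out.detach())).mean())
    opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()

    # Echoing refinement (student -> teacher): teacher learns from fast student feedback.
    with torch.no_grad():
        echo = student(noise, text_emb)
    teacher_loss = F.mse_loss(teacher(noise, text_emb.detach()), echo)
    opt_teacher.zero_grad(); teacher_loss.backward(); opt_teacher.step()

# Dummy batch illustrating expected shapes.
training_step(torch.randn(4, 3, 32, 32), torch.randn(4, 16), torch.randn(4, 3, 32, 32))
```

This sketch only illustrates how the two distillation directions and the shared text encoder interact in one update; the actual loss formulations and network architectures follow the paper.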