The emergence of various adapters, including Low-Rank Adaptation (LoRA) adopted from natural language processing, has allowed diffusion models to personalize image generation at low cost. However, owing to challenges such as limited training data, weak regularization, and scarce computational resources, adapter training often yields unsatisfactory results and corrupts the backbone model's prior knowledge. A well-known symptom is the loss of diversity in object generation: within the same class, the model produces nearly identical objects with only minor variations, which limits its generative capability. To address this issue, we present Contrastive Adapter Training (CAT), a simple yet effective strategy that enhances adapter training through the application of a CAT loss. Our approach preserves the base model's original knowledge when adapters are applied. Furthermore, we introduce the Knowledge Preservation Score (KPS) to evaluate how well CAT retains the model's prior information. We qualitatively and quantitatively demonstrate CAT's improvements. Finally, we discuss CAT's potential for multi-concept adapters and further optimization.
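To make the idea concrete, the sketch below shows one plausible form of such a contrastive regularizer in PyTorch. The exact CAT loss is defined in the paper body, not here; this sketch assumes it penalizes the distance between the frozen base model's prediction and the adapter-equipped model's prediction on inputs without the personalization trigger, so the backbone's prior is preserved while the adapter learns the new concept. The name `cat_regularizer` and the toy stand-in denoisers are illustrative, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def cat_regularizer(base_pred: torch.Tensor,
                    adapted_pred: torch.Tensor,
                    weight: float = 1.0) -> torch.Tensor:
    # Hypothetical CAT-style term: keep the adapted model's prediction
    # close to the frozen base model's prediction so prior knowledge
    # is not overwritten during adapter training.
    return weight * F.mse_loss(adapted_pred, base_pred.detach())

# Toy usage with linear stand-ins for a diffusion denoiser.
base_model = torch.nn.Linear(16, 16)      # frozen backbone (stand-in)
adapted_model = torch.nn.Linear(16, 16)   # backbone + adapter (stand-in)
for p in base_model.parameters():
    p.requires_grad_(False)

noisy_latents = torch.randn(4, 16)
target_noise = torch.randn(4, 16)

# Standard denoising objective on the personalization data...
denoise_loss = F.mse_loss(adapted_model(noisy_latents), target_noise)
# ...plus the contrastive preservation term against the frozen base.
with torch.no_grad():
    base_pred = base_model(noisy_latents)
total_loss = denoise_loss + cat_regularizer(
    base_pred, adapted_model(noisy_latents), weight=0.5)
total_loss.backward()
```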