Recent advances in text-to-image customization have enabled high-fidelity, context-rich generation of personalized images, allowing specific concepts to appear in a variety of scenarios. However, current methods struggle with combining multiple personalized models, often leading to attribute entanglement or requiring separate training to preserve concept distinctiveness. We present LoRACLR, a novel approach for multi-concept image generation that merges multiple LoRA models, each fine-tuned for a distinct concept, into a single, unified model without additional individual fine-tuning. LoRACLR uses a contrastive objective to align and merge the weight spaces of these models, ensuring compatibility while minimizing interference. By enforcing distinct yet cohesive representations for each concept, LoRACLR enables efficient, scalable model composition for high-quality, multi-concept image synthesis. Our results highlight the effectiveness of LoRACLR in accurately merging multiple concepts, advancing the capabilities of personalized image generation.
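The core idea — merging several per-concept LoRA weight deltas into one update whose outputs stay faithful to each concept (positive pairs) while remaining separated across concepts (negative pairs) — can be illustrated with a minimal numeric sketch. Everything below is an illustrative assumption, not the paper's actual objective: the dimensions, the squared-error positive term, the hinge-style negative term, and all hyperparameters are made up for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16           # illustrative feature dimension
n_concepts = 3

# Hypothetical per-concept LoRA deltas (rank-2 B @ A factors) and one
# representative input direction per concept.
deltas = [rng.normal(size=(d, 2)) @ rng.normal(size=(2, d)) * 0.1
          for _ in range(n_concepts)]
xs = [rng.normal(size=d) for _ in range(n_concepts)]
ys = [dw @ x for dw, x in zip(deltas, xs)]  # each concept's own LoRA output

def loss_and_grad(W, lam=0.1, margin=1.0):
    """Contrastive merging loss (illustrative form, not the paper's).

    Positive term: the merged delta W should reproduce concept i's own
    output on concept i's input. Negative term: a hinge penalty keeps the
    merged output for concept i at least `margin` away from every other
    concept's target, discouraging attribute entanglement.
    """
    L, G = 0.0, np.zeros_like(W)
    for i in range(n_concepts):
        r = W @ xs[i] - ys[i]               # positive: match own concept
        L += r @ r
        G += 2.0 * np.outer(r, xs[i])
        for j in range(n_concepts):
            if j == i:
                continue
            diff = W @ xs[i] - ys[j]        # negative: repel other concepts
            dist = np.linalg.norm(diff) + 1e-8
            if dist < margin:
                L += lam * (margin - dist) ** 2
                G += lam * (-2.0 * (margin - dist) / dist) * np.outer(diff, xs[i])
    return L, G

# Optimize a single merged delta by plain gradient descent.
W = np.zeros((d, d))
losses = []
for _ in range(300):
    L, G = loss_and_grad(W)
    losses.append(L)
    W -= 0.01 * G

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

After training, one merged matrix `W` reproduces each concept's output on its own input, which is the sense in which a single unified model can stand in for several individually fine-tuned LoRAs.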