Personalization of diffusion models has seen significant progress. Conventional tuning-free methods typically encode multiple reference images by averaging their image embeddings into a single injection condition, but this image-independent operation cannot model interactions among the references and therefore fails to capture the visual elements they share. Although tuning-based Low-Rank Adaptation (LoRA) can effectively extract consistent elements across multiple images through training, it requires separate fine-tuning for each distinct image group. This paper introduces EasyRef, a novel plug-and-play adaptation method that conditions diffusion models on multiple reference images together with a text prompt. To effectively exploit the visual elements shared across references, we leverage the multi-image comprehension and instruction-following capabilities of a multimodal large language model (MLLM), prompting it to capture consistent visual elements according to an instruction. Moreover, injecting the MLLM's representations into the diffusion process through adapters generalizes readily to unseen domains, mining the consistent visual elements within unseen data. To reduce computational cost and improve fine-grained detail preservation, we introduce an efficient reference aggregation strategy and a progressive training scheme. Finally, we present MRBench, a new multi-reference image generation benchmark. Experimental results demonstrate that EasyRef surpasses both tuning-free methods such as IP-Adapter and tuning-based methods such as LoRA, achieving superior aesthetic quality and robust zero-shot generalization across diverse domains.
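To make the baseline concrete, the embedding-averaging conditioning used by conventional tuning-free methods can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name and array shapes are assumptions.

```python
import numpy as np

def average_image_condition(ref_embeddings: np.ndarray) -> np.ndarray:
    """Average per-image embeddings into one injection condition.

    ref_embeddings: (num_refs, embed_dim), one row per reference image.
    Returns a single (embed_dim,) condition vector. Because each image is
    encoded independently and the rows are simply averaged, no cross-image
    interaction occurs, so shared visual elements are never explicitly
    modeled -- the limitation EasyRef's MLLM-based aggregation addresses.
    """
    return ref_embeddings.mean(axis=0)

# Three hypothetical reference embeddings with embed_dim = 4.
refs = np.array([
    [1.0, 0.0, 2.0, 0.0],
    [1.0, 2.0, 0.0, 0.0],
    [1.0, 1.0, 1.0, 3.0],
])
cond = average_image_condition(refs)  # one vector, regardless of num_refs
```

Note that the resulting condition has the same dimensionality however many references are given, which is what makes the averaging operation cheap but interaction-free.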