Methods for finetuning generative models for concept-driven personalization generally achieve strong results for subject-driven or style-driven generation. Recently, low-rank adaptations (LoRA) have been proposed as a parameter-efficient way of achieving concept-driven personalization. While recent work explores the combination of separate LoRAs to achieve joint generation of learned styles and subjects, existing techniques do not reliably address the problem; they often compromise either subject fidelity or style fidelity. We propose ZipLoRA, a method to cheaply and effectively merge independently trained style and subject LoRAs in order to achieve generation of any user-provided subject in any user-provided style. Experiments on a wide range of subject and style combinations show that ZipLoRA can generate compelling results with meaningful improvements over baselines in subject and style fidelity while preserving the ability to recontextualize. Project page: https://ziplora.github.io
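The abstract describes merging independently trained style and subject LoRAs into one adapter. As a minimal illustration of what "merging" two LoRA weight updates can look like, here is a hedged sketch that combines two LoRA deltas with per-column mixing coefficients; the function and variable names (`merge_loras`, `delta_subject`, `delta_style`, `m_s`, `m_x`) are hypothetical and this is not the paper's exact algorithm, which learns its merger coefficients via optimization:

```python
import numpy as np

def merge_loras(delta_subject, delta_style, m_s, m_x):
    """Column-wise weighted sum of two LoRA weight deltas.

    delta_subject, delta_style: (d_out, d_in) updates, each the product
        B @ A of a low-rank adapter's factor matrices.
    m_s, m_x: (d_in,) per-column mixing coefficients (illustrative;
        in practice such coefficients would be learned, not fixed).
    """
    # Broadcasting multiplies each column of a delta by its coefficient.
    return delta_subject * m_s + delta_style * m_x

# Tiny demo with random deltas.
rng = np.random.default_rng(0)
d_sub = rng.standard_normal((4, 3))
d_sty = rng.standard_normal((4, 3))

# With coefficients (1, 0) the merge recovers the subject delta alone.
merged = merge_loras(d_sub, d_sty, np.ones(3), np.zeros(3))
print(np.allclose(merged, d_sub))  # True
```

The merged delta can then be added to the base model's weights just like a single LoRA update; the open question the paper addresses is how to choose the mixing so that neither subject nor style fidelity is lost.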