Low-Rank Adaptation (LoRA) and other parameter-efficient fine-tuning (PEFT) methods provide low-memory, storage-efficient solutions for personalizing text-to-image models. However, these methods offer little to no improvement in wall-clock training time or the number of steps needed for convergence compared to full model fine-tuning. While PEFT methods assume that shifts in generated distributions (from base to fine-tuned models) can be effectively modeled through weight changes in a low-rank subspace, they fail to leverage knowledge of common use cases, which typically focus on capturing specific styles or identities. Observing that desired outputs often comprise only a small subset of the possible domain covered by LoRA training, we propose reducing the search space by incorporating a prior over regions of interest. We demonstrate that training a hypernetwork model to generate LoRA weights can achieve competitive quality for specific domains while enabling near-instantaneous conditioning on user input, in contrast to traditional training methods that require thousands of steps.