Large text-to-image diffusion models have impressive capabilities in generating photorealistic images from text prompts. How to effectively guide or control these powerful models to perform different downstream tasks has become an important open problem. To tackle this challenge, we introduce a principled finetuning method, Orthogonal Finetuning (OFT), for adapting text-to-image diffusion models to downstream tasks. Unlike existing methods, OFT provably preserves hyperspherical energy, which characterizes the pairwise neuron relationship on the unit hypersphere. We find that this property is crucial for preserving the semantic generation ability of text-to-image diffusion models. To improve finetuning stability, we further propose Constrained Orthogonal Finetuning (COFT), which imposes an additional radius constraint on the hypersphere. Specifically, we consider two important text-to-image finetuning tasks: subject-driven generation, where the goal is to generate subject-specific images given a few images of a subject and a text prompt, and controllable generation, where the goal is to enable the model to take in additional control signals. We empirically show that our OFT framework outperforms existing methods in generation quality and convergence speed.
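For intuition, the hyperspherical energy of a weight matrix W with neurons w_1, ..., w_n can be written (following the minimum-hyperspherical-energy formulation) as HE(W) = \sum_{i \neq j} \|\hat{w}_i - \hat{w}_j\|^{-1}, where \hat{w}_i = w_i / \|w_i\| are the unit-normalized neurons. An orthogonal transform rotates all neurons jointly, changing neither their norms nor their pairwise angles, so this quantity is invariant under such a transform. The sketch below illustrates the idea on a single linear layer, parameterizing the orthogonal matrix with a Cayley transform; the class name OFTLinear and the eps knob (a COFT-style deviation cap) are illustrative assumptions rather than the official implementation, and the block-diagonal structure used in practice for parameter efficiency is omitted for clarity.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class OFTLinear(nn.Module):
        # Finetunes a frozen linear layer by learning an orthogonal matrix R
        # (Cayley transform of a skew-symmetric generator) and using the
        # rotated weight W @ R. Rotating all neurons by the same orthogonal
        # map preserves their norms and pairwise angles, and therefore the
        # hyperspherical energy of the layer.

        def __init__(self, pretrained, eps=None):
            super().__init__()
            self.register_buffer("weight", pretrained.weight.detach().clone())  # frozen, shape (out, in)
            bias = pretrained.bias
            self.register_buffer("bias", bias.detach().clone() if bias is not None else None)
            d = pretrained.in_features
            self.S = nn.Parameter(torch.zeros(d, d))  # zero init => R = I, so finetuning starts at W
            self.eps = eps  # assumed COFT-style deviation cap; None gives plain OFT

        def forward(self, x):
            Q = self.S - self.S.T  # skew-symmetric generator
            if self.eps is not None and Q.norm() > self.eps:
                Q = Q * (self.eps / Q.norm())  # keep R within a small radius of the identity
            I = torch.eye(Q.shape[0], dtype=Q.dtype, device=Q.device)
            R = torch.linalg.solve(I - Q, I + Q)  # Cayley: (I - Q)^{-1}(I + Q) is orthogonal for skew Q
            return F.linear(x, self.weight @ R, self.bias)  # each neuron (row of W) is rotated

Wrapping, for example, the attention projection layers of a diffusion model with such modules and training only S yields a finetune whose neuron geometry is provably unchanged; as eps shrinks toward zero, the model is pinned ever closer to the pretrained weights.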