Text-based editing diffusion models exhibit limited performance when the user's input instruction is ambiguous. To solve this problem, we propose $\textit{Specify ANd Edit}$ (SANE), a zero-shot inference pipeline for diffusion-based editing systems. We use a large language model (LLM) to decompose the input instruction into specific instructions, i.e., well-defined interventions to apply to the input image to satisfy the user's request. We leverage the LLM-derived instructions alongside the original one, thanks to a novel denoising guidance strategy specifically designed for the task. Our experiments with three baselines on two datasets demonstrate the benefits of SANE in all setups. Moreover, our pipeline improves the interpretability of editing models and boosts output diversity. We also demonstrate that our approach can be applied to any edit, whether ambiguous or not. Our code is public at https://github.com/fabvio/SANE.
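The pipeline described above can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's actual guidance strategy: the function names, the stand-in LLM and diffusion calls, and the simple weighted averaging of noise predictions are all assumptions made for clarity.

```python
# Hypothetical sketch of the SANE idea. The decomposition output, the
# model stubs, and the averaging scheme below are illustrative
# assumptions, not the method's real implementation.

def decompose(instruction):
    # Stand-in for the LLM call that turns an ambiguous instruction
    # into specific, well-defined sub-instructions (example outputs).
    return ["brighten the sky", "add warm color tones"]

def predict_noise(latent, instruction):
    # Stand-in for a diffusion model's instruction-conditioned noise
    # prediction; a real model would return a tensor the size of latent.
    return [float(hash(instruction) % 7 - 3) for _ in latent]

def guided_noise(latent, instruction, weight=0.5):
    """Blend the original-instruction prediction with the mean of the
    specific-instruction predictions (a simple averaging assumption)."""
    specifics = decompose(instruction)
    base = predict_noise(latent, instruction)
    spec_preds = [predict_noise(latent, s) for s in specifics]
    mean_spec = [sum(vals) / len(spec_preds) for vals in zip(*spec_preds)]
    return [(1 - weight) * b + weight * m for b, m in zip(base, mean_spec)]
```

At each denoising step, the blended prediction would replace the single-instruction one, letting the specific instructions steer the edit while the original instruction anchors the user's intent.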