Text-guided diffusion models have revolutionized generative tasks by producing high-fidelity content from text descriptions. They have also enabled an editing paradigm where concepts can be replaced through text conditioning (e.g., a dog to a tiger). In this work, we explore a novel approach: instead of replacing a concept, can we enhance or suppress the concept itself? Through an empirical study, we identify a trend where concepts can be decomposed in text-guided diffusion models. Leveraging this insight, we introduce ScalingConcept, a simple yet effective method to scale decomposed concepts up or down in real inputs without introducing new elements. To systematically evaluate our approach, we present the WeakConcept-10 dataset, where concepts are imperfect and need to be enhanced. More importantly, ScalingConcept enables a variety of novel zero-shot applications across the image and audio domains, including tasks such as canonical pose generation and generative sound highlighting or removal.
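The abstract does not spell out the scaling mechanism, but the decomposition it alludes to can be pictured along the lines of the classifier-free-guidance split between a concept-conditioned and a null-conditioned noise prediction, applied when reconstructing a real input. The sketch below is a minimal illustration under that assumption; `eps`, `scaled_noise_prediction`, and the `omega` factor are hypothetical names for exposition, not the paper's API.

```python
import torch
from typing import Optional

torch.manual_seed(0)

# Toy stand-in for a pretrained noise predictor eps_theta(x_t, t, cond).
# Real use would call a text-guided diffusion model (e.g., Stable Diffusion)
# on a DDIM-inverted real input; this stub only preserves tensor shapes.
def eps(x_t: torch.Tensor, t: int, cond: Optional[torch.Tensor]) -> torch.Tensor:
    if cond is None:
        return 0.1 * x_t                       # null-conditioned ("concept-free") branch
    return 0.1 * x_t + 0.05 * cond.mean()      # concept-conditioned branch

def scaled_noise_prediction(x_t, t, concept_emb, omega):
    """Recombine the decomposed prediction with the concept term scaled:
    omega > 1 enhances the concept, 0 <= omega < 1 suppresses it,
    and omega == 1 reconstructs the input unchanged."""
    eps_null = eps(x_t, t, None)               # concept-free component
    eps_concept = eps(x_t, t, concept_emb)     # concept-present prediction
    return eps_null + omega * (eps_concept - eps_null)

x_t = torch.randn(1, 4, 64, 64)    # noised latent at denoising step t
concept = torch.randn(77, 768)     # hypothetical text embedding of the concept
enhanced = scaled_noise_prediction(x_t, 50, concept, omega=3.0)   # scale concept up
removed = scaled_noise_prediction(x_t, 50, concept, omega=0.0)    # scale concept down
print(enhanced.shape, removed.shape)
```

Because the recombination only rescales a component already present in the model's own prediction, no new concept is injected, which matches the abstract's claim of scaling "without introducing new elements."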