Text-based adversarial guidance using a negative prompt has emerged as a widely adopted approach for steering diffusion models away from producing undesired concepts. While useful, adversarial guidance through text alone can be insufficient for capturing complex visual concepts or avoiding specific visual elements such as copyrighted characters. In this paper, for the first time we explore an alternate modality in this direction by performing adversarial guidance directly using visual features from a reference image or other images in a batch. We introduce negative token merging (NegToMe), a simple but effective training-free approach that performs adversarial guidance through images by selectively pushing apart matching visual features between reference and generated images during the reverse diffusion process. By simply adjusting the reference used, NegToMe enables a diverse range of applications. Notably, when using other images in the same batch as reference, we find that NegToMe significantly enhances output diversity (e.g., racial, gender, visual) by guiding the features of each image away from the others. Similarly, when used w.r.t. copyrighted reference images, NegToMe reduces visual similarity to copyrighted content by 34.57%. NegToMe is simple to implement in just a few lines of code, incurs only marginally higher (<4%) inference time, and is compatible with different diffusion architectures, including those like Flux that do not natively support the use of a negative prompt. Code is available at https://negtome.github.io
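The core idea of pushing apart matching visual features can be illustrated with a minimal sketch: for each token feature of the image being generated, find its most similar reference token by cosine similarity and, if the match is strong, linearly extrapolate the generated token away from it. This is an illustrative approximation only; the function name, the `alpha` strength, and the `thresh` matching cutoff below are assumptions for exposition, not the paper's exact formulation.

```python
import numpy as np

def negtome_sketch(gen_tokens, ref_tokens, alpha=0.8, thresh=0.5):
    """Illustrative sketch of negative token merging (assumed form).

    gen_tokens: (N, d) token features of the image being generated.
    ref_tokens: (M, d) token features of the reference image.

    For each generated token, find the best-matching reference token
    (cosine similarity). If the match exceeds `thresh`, push the
    generated token away from it by linear extrapolation.
    """
    # Cosine similarities between every generated/reference token pair.
    g = gen_tokens / np.linalg.norm(gen_tokens, axis=1, keepdims=True)
    r = ref_tokens / np.linalg.norm(ref_tokens, axis=1, keepdims=True)
    sim = g @ r.T                          # (N, M)

    idx = sim.argmax(axis=1)               # best-matching reference token
    best = sim.max(axis=1)
    matched = ref_tokens[idx]              # (N, d)

    # Only push apart tokens with a sufficiently strong match.
    mask = (best > thresh)[:, None]
    pushed = gen_tokens + alpha * (gen_tokens - matched)
    return np.where(mask, pushed, gen_tokens)
```

In an actual diffusion pipeline this operation would be applied to intermediate transformer token features at each denoising step, with the reference tokens taken either from a copyrighted reference image or from the other images in the batch.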