Vision-Language Models (VLMs) struggle with negation. Given a prompt like "retrieve (or generate) a street scene without pedestrians," they often fail to respect the negation. Existing methods address this limitation by fine-tuning on large negation datasets, but such retraining often compromises the model's zero-shot performance on affirmative prompts. We show that the embedding space of VLMs, such as CLIP, can be divided into semantically consistent subspaces. Based on this property, we propose a training-free framework that models negation as a subspace in the joint embedding space rather than a single point (Figure 1). To find the image matching a caption such as "A but not N," we construct two spherical caps around the embeddings of A and N, and score images against the central direction of the region that is close to A and far from N. Across retrieval, multiple-choice question (MCQ), and text-to-image tasks, our method improves negation understanding by about 30% on average over prior methods. It closes the gap between affirmative and negated prompts while preserving the zero-shot performance that fine-tuned models fail to maintain. Code will be released upon publication.
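To make the scoring rule concrete, below is a minimal NumPy sketch, assuming unit-norm CLIP embeddings for the image, the affirmative phrase A, and the negated phrase N. The difference-based central direction and the weight `lam` are illustrative assumptions standing in for the paper's spherical-cap construction; this is not the released implementation.

```python
import numpy as np

def negation_score(img_emb: np.ndarray, e_A: np.ndarray, e_N: np.ndarray,
                   lam: float = 0.5) -> float:
    """Score a unit-norm image embedding against a caption 'A but not N'.

    A hypothetical central direction of the region close to A and far
    from N: push toward e_A, pull away from e_N, then renormalize.
    """
    direction = e_A - lam * e_N
    direction = direction / np.linalg.norm(direction)
    # Cosine similarity of the image embedding with that direction.
    return float(img_emb @ direction)

def retrieve(img_embs: np.ndarray, e_A: np.ndarray, e_N: np.ndarray) -> int:
    """Return the index of the best-matching image.

    img_embs: (num_images, dim) matrix of unit-norm image embeddings.
    """
    scores = [negation_score(v, e_A, e_N) for v in img_embs]
    return int(np.argmax(scores))
```

In this sketch, retrieval simply ranks candidate images by their similarity to the assumed central direction; the choice of `lam` controls how strongly the negated concept N is pushed away relative to the affirmative concept A.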