Compositional reasoning in Vision-Language Models (VLMs) remains challenging, as these models often struggle to relate objects, attributes, and spatial relationships. Recent methods aim to address these limitations by relying on the semantics of the textual description, using Large Language Models (LLMs) to break it down into subsets of questions and answers. However, these methods operate primarily at the surface level, failing to incorporate deeper lexical understanding while introducing incorrect assumptions generated by the LLM. In response to these issues, we present Caption Expansion with Contradictions and Entailments (CECE), a principled approach that leverages Natural Language Inference (NLI) to generate entailments and contradictions from a given premise. CECE produces lexically diverse sentences while preserving their core meaning. Through extensive experiments, we show that CECE enhances interpretability and reduces overreliance on biased or superficial features. By balancing CECE alongside the original premise, we achieve significant improvements over previous methods without requiring additional fine-tuning, producing state-of-the-art results on benchmarks that score agreement with human judgments for image-text alignment, and achieving gains over the best prior work (fine-tuned with targeted data) of +19.2% on Winoground (group score) and +12.9% on EqBen (group score).