Despite the widespread adoption of Vision-Language Understanding (VLU) benchmarks such as VQA v2, OKVQA, A-OKVQA, GQA, VCR, SWAG, and VisualCOMET, our analysis reveals a pervasive issue affecting their integrity: these benchmarks contain samples where answers rely on assumptions unsupported by the provided context. Training models on such data fosters biased learning and hallucinations, as models tend to make similar unwarranted assumptions. To address this issue, we collect contextual data for each sample whenever available and train a context selection module to facilitate evidence-based model predictions. Strong improvements across multiple benchmarks demonstrate the effectiveness of our approach. Further, we develop a general-purpose Context-AwaRe Abstention (CARA) detector that identifies samples lacking sufficient context and improves model accuracy by abstaining from responding when the required context is absent. CARA generalizes to new benchmarks it was not trained on, underscoring its utility for future VLU benchmarks in detecting or cleaning samples with inadequate context. Finally, we curate a Context Ambiguity and Sufficiency Evaluation (CASE) set to benchmark the performance of insufficient-context detectors. Overall, our work represents a significant advancement in ensuring that vision-language models generate trustworthy and evidence-based outputs in complex real-world scenarios.
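To make the abstention mechanism concrete, the sketch below illustrates the general idea of gating a VQA model behind a context-sufficiency detector. It is a minimal, hypothetical example: the `context_sufficiency_score` heuristic, the `Sample` fields, and the threshold are assumptions for illustration, not the paper's CARA implementation, which is a trained detector.

```python
# Minimal sketch of context-aware abstention (assumed interfaces, not the paper's code).
from dataclasses import dataclass


@dataclass
class Sample:
    question: str
    context: str  # collected contextual evidence; may be empty or insufficient


def context_sufficiency_score(sample: Sample) -> float:
    """Stand-in for a trained detector such as CARA: a naive length-based
    heuristic returning a pseudo-probability that the context supports an answer."""
    return min(len(sample.context.split()) / 20.0, 1.0)


def answer_or_abstain(sample: Sample, vqa_model, threshold: float = 0.5) -> str:
    """Abstain when the detector judges the available context insufficient;
    otherwise defer to the underlying vision-language model."""
    if context_sufficiency_score(sample) < threshold:
        return "[ABSTAIN] insufficient context"
    return vqa_model(sample.question, sample.context)


if __name__ == "__main__":
    toy_model = lambda q, c: "an answer grounded in the provided context"
    # No context: the wrapper abstains instead of guessing.
    print(answer_or_abstain(Sample("What is the man celebrating?", ""), toy_model))
    # Sufficient context: the wrapper passes the query through.
    print(answer_or_abstain(
        Sample("What is the man celebrating?",
               "Caption: a man in a graduation gown holds a diploma while "
               "friends applaud outside the auditorium."),
        toy_model))
```

In the paper's setting, the heuristic scorer would be replaced by the trained CARA detector, and abstentions can either be reported as such at inference time or used to filter insufficient-context samples from training and evaluation data.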