Despite the widespread adoption of Vision-Language Understanding (VLU) benchmarks such as VQA v2, OKVQA, A-OKVQA, GQA, VCR, SWAG, and VisualCOMET, our analysis reveals a pervasive issue affecting their integrity: these benchmarks contain samples whose answers rely on assumptions unsupported by the provided context. Training models on such data fosters biased learning and hallucinations, as models tend to make similar unwarranted assumptions. To address this issue, we collect contextual data for each sample whenever available and train a context selection module to facilitate evidence-based model predictions. Strong improvements across multiple benchmarks demonstrate the effectiveness of our approach. Further, we develop a general-purpose Context-AwaRe Abstention (CARA) detector that identifies samples lacking sufficient context and improves model accuracy by abstaining from responding when the required context is absent. CARA generalizes to new benchmarks it was not trained on, underscoring its utility for detecting or cleaning samples with inadequate context in future VLU benchmarks. Finally, we curate a Context Ambiguity and Sufficiency Evaluation (CASE) set to benchmark the performance of insufficient-context detectors. Overall, our work represents a significant advancement in ensuring that vision-language models generate trustworthy, evidence-based outputs in complex real-world scenarios.