Despite the widespread adoption of Vision-Language Understanding (VLU) benchmarks such as VQA v2, OKVQA, A-OKVQA, GQA, VCR, SWAG, and VisualCOMET, our analysis reveals a pervasive issue affecting their integrity: these benchmarks contain samples whose answers rely on assumptions unsupported by the provided context. Training models on such data fosters biased learning and hallucinations, as models tend to make similar unwarranted assumptions. To address this issue, we collect contextual data for each sample whenever available and train a context selection module to facilitate evidence-based model predictions. Strong improvements across multiple benchmarks demonstrate the effectiveness of our approach. Further, we develop a general-purpose Context-AwaRe Abstention (CARA) detector that identifies samples lacking sufficient context and improves model accuracy by abstaining from responding when the required context is absent. CARA generalizes to new benchmarks it was not trained on, underscoring its utility for detecting or cleaning samples with inadequate context in future VLU benchmarks. Finally, we curate a Context Ambiguity and Sufficiency Evaluation (CASE) set to benchmark the performance of insufficient-context detectors. Overall, our work represents a significant advancement in ensuring that vision-language models generate trustworthy and evidence-based outputs in complex real-world scenarios.