We introduce HallusionBench, a comprehensive benchmark designed for the evaluation of image-context reasoning. This benchmark presents significant challenges to advanced large visual-language models (LVLMs), such as GPT-4V(Vision), Gemini Pro Vision, Claude 3, and LLaVA-1.5, by emphasizing nuanced understanding and interpretation of visual data. The benchmark comprises 346 images paired with 1129 questions, all meticulously crafted by human experts. We introduce a novel structure for these visual questions designed to establish control groups. This structure enables us to conduct a quantitative analysis of the models' response tendencies, logical consistency, and various failure modes. In our evaluation on HallusionBench, we benchmark 15 different models, highlighting a 31.42% question-pair accuracy achieved by the state-of-the-art GPT-4V. Notably, all other evaluated models achieve accuracy below 16%. Moreover, our analysis not only highlights the observed failure modes, including language hallucination and visual illusion, but also deepens our understanding of these pitfalls. Our comprehensive case studies within HallusionBench shed light on the challenges of hallucination and illusion in LVLMs. Based on these insights, we suggest potential pathways for their future improvement. The benchmark and codebase can be accessed at https://github.com/tianyi-lab/HallusionBench.
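For readers unfamiliar with pair-based scoring, below is a minimal sketch of how a question-pair accuracy of this kind can be computed. It assumes that questions forming a control group share an identifier and that a pair counts as correct only when every question in it is answered correctly; the function name, field names, and data layout are illustrative assumptions, not taken from the released codebase.

```python
from collections import defaultdict

def question_pair_accuracy(records):
    """Compute question-pair accuracy over a list of per-question records.

    Each record is a dict with a 'pair_id' (questions crafted as a control
    group share the same id) and a boolean 'correct' flag. A pair counts as
    correct only when every question in that group is answered correctly,
    so inconsistent answers across paired questions are penalized.
    """
    groups = defaultdict(list)
    for r in records:
        groups[r["pair_id"]].append(r["correct"])
    if not groups:
        return 0.0
    return sum(all(flags) for flags in groups.values()) / len(groups)

# Toy usage: two control groups, only the first is answered consistently.
records = [
    {"pair_id": "fig1_q0", "correct": True},
    {"pair_id": "fig1_q0", "correct": True},
    {"pair_id": "fig2_q0", "correct": True},
    {"pair_id": "fig2_q0", "correct": False},
]
print(question_pair_accuracy(records))  # 0.5
```

Scoring at the pair level rather than per question is what makes the control-group structure informative: a model that answers one phrasing correctly but contradicts itself on the paired phrasing receives no credit.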