Large Vision-Language Models (LVLMs) suffer from hallucination issues, wherein the models generate plausible-sounding but factually incorrect outputs, undermining their reliability. A comprehensive quantitative evaluation is necessary to identify and understand the extent of hallucinations in these models. However, existing benchmarks are often limited in scope, focusing mainly on object hallucinations. Furthermore, current evaluation methods struggle to effectively address the subtle semantic distinctions between model outputs and reference data, as well as the balance between hallucination and informativeness. To address these issues, we introduce a multi-dimensional benchmark covering objects, attributes, and relations, with challenging images selected based on associative biases. Moreover, we propose a large language model (LLM)-based two-stage evaluation framework that generalizes the popular CHAIR metric and incorporates both faithfulness and coverage into the evaluation. Experiments on 10 established LVLMs demonstrate that, on our challenging human-annotated benchmark dataset, our evaluation metric is more comprehensive and better correlated with human judgments than existing work. Our work also highlights the critical balance between faithfulness and coverage of model outputs, and encourages future work to address hallucinations in LVLMs while keeping their outputs informative.
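As a minimal illustrative sketch of the quantities referenced above (not the exact formulation of our framework), assume $\mathcal{M}$ is the set of elements (objects, attributes, or relations) mentioned in a model's output and $\mathcal{R}$ is the set of elements in the human-annotated reference. The original object-level CHAIR metric and precision/recall-style notions of faithfulness and coverage can then be written as
\[
\mathrm{CHAIR} = \frac{\lvert \{\text{hallucinated objects}\} \rvert}{\lvert \{\text{mentioned objects}\} \rvert}, \qquad
\mathrm{Faithfulness} = \frac{\lvert \mathcal{M} \cap \mathcal{R} \rvert}{\lvert \mathcal{M} \rvert}, \qquad
\mathrm{Coverage} = \frac{\lvert \mathcal{M} \cap \mathcal{R} \rvert}{\lvert \mathcal{R} \rvert}.
\]
Under this view, a model can trivially score well on faithfulness by describing very little, which is why coverage must be reported alongside it.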