Hallucination poses a persistent challenge for multimodal large language models (MLLMs). However, existing benchmarks for evaluating hallucinations are generally static, which may overlook the potential risk of data contamination. To address this issue, we propose ODE, an open-set, dynamic protocol designed to evaluate object hallucinations in MLLMs at both the existence and attribute levels. ODE employs a graph-based structure to represent real-world object concepts, their attributes, and the distributional associations between them. This structure facilitates the extraction of concept combinations under diverse distributional criteria, yielding varied samples for structured queries that evaluate hallucinations in both generative and discriminative tasks. By generating new samples with dynamic concept combinations and varied distribution frequencies, ODE mitigates the risk of data contamination and broadens the scope of evaluation. The protocol is applicable to both general and specialized scenarios, including those with limited data. Experimental results demonstrate the effectiveness of our protocol, revealing that MLLMs exhibit higher hallucination rates when evaluated with ODE-generated samples, which indicates potential data contamination in static benchmarks. Furthermore, these generated samples aid in analyzing hallucination patterns and fine-tuning models, offering an effective approach to mitigating hallucinations in MLLMs.
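The graph-based sampling idea above can be illustrated with a minimal sketch. All class and function names here (`ConceptGraph`, `sample_pair`, `discriminative_query`) and the co-occurrence frequencies are hypothetical illustrations, not the authors' implementation: nodes are object concepts, weighted edges store distributional associations, and concept combinations are drawn under different distributional criteria (e.g. common vs. long-tail) to build evaluation queries.

```python
import random

class ConceptGraph:
    """Toy concept graph: weighted edges record how often two
    object concepts co-occur in real-world scenes (illustrative)."""

    def __init__(self):
        self.cooccur = {}  # (concept_a, concept_b) -> co-occurrence frequency

    def add_pair(self, a, b, freq):
        # Store pairs in canonical (sorted) order so (a, b) == (b, a).
        self.cooccur[tuple(sorted((a, b)))] = freq

    def sample_pair(self, criterion="common", rng=None):
        """Draw one concept combination under a distributional criterion."""
        rng = rng or random
        pairs = sorted(self.cooccur.items(), key=lambda kv: kv[1])
        if criterion == "common":        # frequent, in-distribution combinations
            pool = pairs[len(pairs) // 2:]
        elif criterion == "long-tail":   # rare combinations, more hallucination-prone
            pool = pairs[: len(pairs) // 2 or 1]
        else:                            # uniform over all combinations
            pool = pairs
        return rng.choice(pool)[0]

def discriminative_query(pair):
    """Turn a sampled concept combination into a yes/no probe question."""
    a, b = pair
    return f"Is there a {b} next to the {a} in the image?"

# Illustrative frequencies, not real statistics.
graph = ConceptGraph()
graph.add_pair("table", "chair", 0.90)
graph.add_pair("surfboard", "dog", 0.05)
graph.add_pair("car", "road", 0.80)

pair = graph.sample_pair("long-tail", rng=random.Random(0))
print(discriminative_query(pair))
```

Varying the `criterion` shifts the evaluation distribution: long-tail combinations are unlikely to appear in any static benchmark a model may have memorized, which is what lets a dynamic protocol expose contamination-inflated scores.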