Hallucinations in vision-language models pose a significant challenge to their reliability, particularly in the generation of long captions. Current methods fall short of accurately identifying and mitigating these hallucinations. To address this issue, we introduce ESREAL, a novel unsupervised learning framework designed to suppress the generation of hallucinations through accurate localization and penalization of hallucinated tokens. Initially, ESREAL creates a reconstructed image based on the generated caption and aligns its corresponding regions with those of the original image. This semantic reconstruction aids in identifying both the presence and type of token-level hallucinations within the generated caption. Subsequently, ESREAL computes token-level hallucination scores by assessing the semantic similarity of aligned regions according to the type of hallucination. Finally, ESREAL employs a proximal policy optimization algorithm that selectively penalizes hallucinated tokens according to their token-level hallucination scores. Our framework notably reduces hallucinations in LLaVA, InstructBLIP, and mPLUG-Owl2 by 32.81%, 27.08%, and 7.46%, respectively, on the CHAIR metric. This improvement is achieved solely through signals derived from the image itself, without the need for any image-text pairs.
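The scoring-and-penalization steps above can be sketched in code. This is a minimal illustration, not the paper's implementation: the function names, the token-to-region span format, and the similarity threshold are all hypothetical, and it assumes that region alignment between the original and reconstructed images (and their similarity scores) has already been computed upstream.

```python
def token_hallucination_scores(token_spans, region_similarity, threshold=0.5):
    """Assign a hallucination score to each caption token.

    token_spans: list of (start, end, region_id) tuples mapping caption
        tokens to the reconstructed-image region they describe; region_id
        is None for tokens with no aligned region.
    region_similarity: dict mapping region_id -> semantic similarity (in
        [0, 1]) between the aligned original and reconstructed regions.
    threshold: illustrative cutoff below which a region is treated as
        hallucinated (an assumption, not a value from the paper).

    Returns one score per span: 0.0 for tokens judged faithful, and a
    negative penalty proportional to the dissimilarity otherwise.
    """
    scores = []
    for start, end, region in token_spans:
        sim = region_similarity.get(region, 1.0) if region is not None else 1.0
        # Low similarity between aligned regions signals a hallucination;
        # the penalty grows as similarity drops further below the threshold.
        scores.append(0.0 if sim >= threshold else sim - threshold)
    return scores


def penalized_rewards(base_reward, hallu_scores, weight=1.0):
    """Fold selective token-level penalties into a per-token reward
    sequence, as one might feed into a PPO-style policy update."""
    return [base_reward + weight * s for s in hallu_scores]
```

For example, a caption token aligned to a poorly matching region (similarity 0.2) receives a negative score, while a well-matched token (similarity 0.9) is left unpenalized, so the policy gradient only pushes down on the hallucinated tokens.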