Large vision-language models (LVLMs) have demonstrated remarkable capabilities in multimodal understanding and generation tasks. However, these models occasionally generate hallucinated text, producing descriptions that seem reasonable but do not correspond to the image. In autonomous driving systems, this can lead to incorrect driving decisions. To address this challenge, this paper proposes HCOENet, a plug-and-play chain-of-thought correction method designed to eliminate object hallucinations and generate enhanced descriptions for critical objects overlooked in the initial response. Specifically, HCOENet employs a cross-checking mechanism to filter entities and directly extracts critical objects from the given image, enriching the descriptive text. Experimental results on the POPE benchmark demonstrate that HCOENet improves the F1-scores of the Mini-InternVL-4B and mPLUG-Owl3 models by 12.58% and 4.28%, respectively. Additionally, qualitative results on images collected in an open campus scene further highlight the practical applicability of the proposed method. Compared with the GPT-4o model, HCOENet achieves comparable descriptive performance while significantly reducing costs. Finally, two novel semantic understanding datasets, CODA_desc and nuScenes_desc, are created for traffic scenarios to support future research. The code and datasets are publicly available at https://github.com/fjq-tongji/HCOENet.
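The cross-checking and enrichment idea described above can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, the majority-vote filtering rule, and the string-matching enrichment step are all simplifying assumptions, standing in for the actual LVLM-based entity verification and object extraction.

```python
from collections import Counter

def cross_check_entities(entity_lists, min_votes=2):
    """Hypothetical cross-checking filter: keep only entities that appear
    in at least `min_votes` of the candidate model responses, discarding
    entities mentioned by a single response as likely hallucinations."""
    votes = Counter()
    for entities in entity_lists:
        votes.update(set(entities))  # count each response at most once per entity
    return {entity for entity, n in votes.items() if n >= min_votes}

def enrich_description(base_desc, verified_entities, critical_objects):
    """Hypothetical enrichment step: append verified critical objects
    that the initial description overlooked."""
    missing = [obj for obj in critical_objects
               if obj in verified_entities and obj not in base_desc]
    if missing:
        base_desc += " Also visible: " + ", ".join(sorted(missing)) + "."
    return base_desc
```

In this toy setting, an entity such as "dog" mentioned by only one of three responses is filtered out, while a verified but undescribed critical object (e.g. a pedestrian) is appended to the final description.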