Hallucination casts a long shadow over the rapidly evolving Multimodal Large Language Models (MLLMs), referring to the phenomenon in which generated text is inconsistent with the image content. To mitigate hallucinations, existing studies mainly resort to instruction tuning, which requires retraining the models on specific data. In this paper, we pave a different way, introducing a training-free method named Woodpecker. Like a woodpecker heals trees, it picks out and corrects hallucinations in the generated text. Concretely, Woodpecker consists of five stages: key concept extraction, question formulation, visual knowledge validation, visual claim generation, and hallucination correction. Implemented as a post-remedy, Woodpecker can readily serve different MLLMs while remaining interpretable through the intermediate outputs of its five stages. We evaluate Woodpecker both quantitatively and qualitatively and show the great potential of this new paradigm. On the POPE benchmark, our method improves accuracy by 30.66%/24.33% over the baseline MiniGPT-4/mPLUG-Owl. The source code is released at https://github.com/BradyFU/Woodpecker.
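The five-stage pipeline above can be sketched as follows. This is a minimal illustrative skeleton, not the paper's implementation: every function body is a hypothetical stand-in (simple keyword matching in place of real MLLM, detector, or VQA calls), and only the five-stage structure follows the abstract.

```python
# Sketch of a Woodpecker-style training-free correction pipeline.
# All logic below is a toy stand-in; a real system would call an MLLM,
# an object detector, and a VQA model at the corresponding stages.

def extract_key_concepts(text):
    # Stage 1: pull out the main objects mentioned in the generated text.
    # (Toy version: match against a tiny fixed vocabulary.)
    vocabulary = {"dog", "cat", "frisbee", "bench"}
    seen = []
    for word in text.lower().replace(".", "").split():
        if word in vocabulary and word not in seen:
            seen.append(word)
    return seen

def formulate_questions(concepts):
    # Stage 2: turn each key concept into a verification question.
    return [f"Is there a {c} in the image?" for c in concepts]

def validate_visual_knowledge(questions, image_facts):
    # Stage 3: answer each question against the image. Here a set of
    # ground-truth object names stands in for a detector/VQA model.
    return {q: any(fact in q for fact in image_facts) for q in questions}

def generate_visual_claims(answers):
    # Stage 4: convert validated answers into explicit visual claims
    # of the form (object, present_in_image).
    claims = []
    for question, present in answers.items():
        obj = question.removeprefix("Is there a ").removesuffix(" in the image?")
        claims.append((obj, present))
    return claims

def correct_hallucinations(text, claims):
    # Stage 5: drop sentences mentioning objects absent from the image.
    # (The actual method rewrites text rather than deleting it.)
    absent = [obj for obj, present in claims if not present]
    kept = []
    for sentence in text.split(". "):
        if not any(obj in sentence.lower() for obj in absent):
            kept.append(sentence)
    return ". ".join(kept)

def woodpecker_pipeline(text, image_facts):
    # Chain the five stages; intermediate outputs stay inspectable,
    # which is what makes the post-remedy approach interpretable.
    concepts = extract_key_concepts(text)
    questions = formulate_questions(concepts)
    answers = validate_visual_knowledge(questions, image_facts)
    claims = generate_visual_claims(answers)
    return correct_hallucinations(text, claims)
```

Because the pipeline only post-processes generated text, it can wrap any MLLM's output without retraining, which is the key property the abstract emphasizes.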