Large language models (LLMs) are highly effective across a wide range of natural language processing (NLP) tasks. However, they are prone to generating unreliable conjectures in ambiguous contexts, a phenomenon known as hallucination. This paper presents a new method for evaluating LLM hallucination in question answering (QA) based on unanswerable math word problems (MWPs). To support this approach, we develop a dataset called Unanswerable Math Word Problem (UMWP), comprising 5,200 questions across five categories. We further devise an evaluation methodology that combines text similarity and mathematical expression detection to determine whether an LLM recognizes a question as unanswerable. Extensive experiments on 31 LLMs, including GPT-3, InstructGPT, LLaMA, and Claude, demonstrate that in-context learning and reinforcement learning from human feedback (RLHF) training significantly enhance a model's ability to avoid hallucination. We show that unanswerable MWPs provide a reliable and effective means of assessing hallucination. Our code and data are available at https://github.com/Yuki-Asuuna/UMWP.
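The abstract's evaluation idea, judging whether a model's response signals unanswerability via text similarity while also checking that it does not commit to a concrete numeric answer, can be illustrated with a minimal sketch. The template phrases, similarity measure (character-level `difflib` ratio), threshold, and regex below are all illustrative assumptions, not the paper's actual implementation.

```python
import re
from difflib import SequenceMatcher

# Hypothetical templates signaling unanswerability (illustrative only;
# not the paper's actual template set).
UNANSWERABLE_TEMPLATES = [
    "the question is unanswerable",
    "there is not enough information",
    "the problem cannot be solved",
]

def similarity(a: str, b: str) -> float:
    """Character-level similarity in [0, 1] via difflib."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def contains_math_answer(response: str) -> bool:
    """Detect a committed numeric answer, e.g. '= 42' or 'answer is 3.5'."""
    pattern = r"(=\s*-?\d+(\.\d+)?)|(answer\s+is\s+-?\d+(\.\d+)?)"
    return re.search(pattern, response, re.IGNORECASE) is not None

def judged_unanswerable(response: str, threshold: float = 0.6) -> bool:
    """A response counts as recognizing unanswerability if it closely
    matches an unanswerability template and gives no concrete answer."""
    matches_template = any(
        similarity(response, t) >= threshold for t in UNANSWERABLE_TEMPLATES
    )
    return matches_template and not contains_math_answer(response)
```

For example, a response like "There is not enough information to solve this." would be judged as recognizing unanswerability, while "The answer is 42." would not; a production evaluator would likely use embedding-based similarity and a richer answer-extraction step.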