Large language models (LLMs) have attracted significant attention since their inception and have found applications across academic and industrial domains. However, these models often suffer from the "hallucination problem": outputs that, though grammatically and logically coherent, lack factual accuracy or are entirely fabricated. A recently discovered and widely discussed example is the numerical comparison error, in which multiple LLMs incorrectly infer that "9.11$>$9.9". We find that the order in which an LLM generates its answer and its reasoning affects its consistency. Specifically, results differ significantly when the model states an answer first and then provides the reasoning versus when it generates the reasoning process first and then the conclusion. Motivated by this observation, we propose a new benchmark for assessing LLM consistency: comparing responses generated under these two orderings. The benchmark effectively identifies cases in which an LLM fabricates an answer and then generates a post-hoc justification. Furthermore, we introduce a novel and straightforward prompt strategy designed to mitigate this issue. Experimental results demonstrate that this strategy improves performance across various LLMs compared with direct questioning. This work not only sheds light on a critical flaw in LLMs but also offers a practical solution to enhance their reliability.
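The consistency benchmark described above can be sketched as follows. This is a minimal illustration, not the paper's exact protocol: the prompt wording, the `Answer:` extraction convention, and the use of canned responses in place of real model calls are all assumptions made for the example.

```python
# Sketch of the answer-order consistency check (illustrative; the prompt
# templates and the "Answer:" marker convention are assumptions).

def answer_first_prompt(question: str) -> str:
    """Ask the model to commit to an answer before explaining it."""
    return f"{question}\nFirst state your final answer as 'Answer: ...', then explain your reasoning."

def reasoning_first_prompt(question: str) -> str:
    """Ask the model to reason step by step before answering."""
    return f"{question}\nReason step by step, then state your final answer as 'Answer: ...'."

def extract_answer(response: str) -> str:
    """Pull out whatever follows the last 'Answer:' marker."""
    return response.split("Answer:")[-1].strip().lower()

def is_consistent(answer_first_response: str, reasoning_first_response: str) -> bool:
    """Flag a potential fabricated answer when the two orderings disagree."""
    return extract_answer(answer_first_response) == extract_answer(reasoning_first_response)

# Canned responses mimicking the 9.11-vs-9.9 failure mode: committing to an
# answer first can lock in the wrong comparison, while reasoning first does not.
resp_a = "Answer: 9.11 is larger. Because 11 > 9 after the decimal point."
resp_b = "0.9 > 0.11, so 9.9 is larger. Answer: 9.9 is larger."
print(is_consistent(resp_a, resp_b))  # the two orderings disagree -> False
```

In practice the two prompts would be sent to the same model and the extracted answers compared across a question set; the disagreement rate then serves as the consistency score.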