Large language models (LLMs) have attracted significant attention since their inception and have found applications across a wide range of academic and industrial domains. However, these models often suffer from the "hallucination problem": outputs that, though grammatically and logically coherent, lack factual accuracy or are entirely fabricated. A particularly troubling issue, discovered and widely discussed recently, is the numerical comparison error in which multiple LLMs incorrectly infer that "9.11$>$9.9". We discovered that the order in which an LLM generates its answer and its reasoning affects consistency: results differ significantly when the model generates an answer first and then provides the reasoning versus generating the reasoning first and then the conclusion. Inspired by this, we propose a new benchmark method for assessing LLM consistency: comparing responses generated through these two different orders. This benchmark effectively identifies instances where LLMs fabricate an answer and then generate a post-hoc justification. Furthermore, we introduce a novel and straightforward prompting strategy designed to mitigate this issue. Experimental results demonstrate that this strategy improves performance across various LLMs compared with direct questioning. This work not only sheds light on a critical flaw in LLMs but also offers a practical solution to enhance their reliability.
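The two-order consistency check described above can be sketched as follows. This is a minimal illustration, not the paper's exact protocol: the prompt wording, the `Answer:` extraction convention, and the `query_llm` callable are all assumptions, and `toy_llm` is a hypothetical stand-in that mimics the "9.11$>$9.9" failure mode rather than a real model API.

```python
import re

def extract_answer(response: str) -> str:
    """Pull the text after the last 'Answer:' marker (an assumed output format)."""
    matches = re.findall(r"Answer:\s*(.+)", response)
    return matches[-1].strip() if matches else response.strip()

def consistency_check(question: str, query_llm) -> bool:
    """Query the model in both orders and report whether the final answers agree."""
    # Order 1: answer first, then justification.
    answer_first = query_llm(
        f"State your answer immediately, then justify it. "
        f"Begin with 'Answer: <result>'.\nQ: {question}")
    # Order 2: reasoning first, then conclusion.
    reasoning_first = query_llm(
        f"Reason step by step, then finish with 'Answer: <result>'.\nQ: {question}")
    return extract_answer(answer_first) == extract_answer(reasoning_first)

def toy_llm(prompt: str) -> str:
    """Hypothetical model exhibiting the numerical-comparison inconsistency."""
    if "immediately" in prompt:
        # Answer-first: commits to the wrong comparison.
        return "Answer: 9.11\nBecause 11 is greater than 9."
    # Reasoning-first: comparing fractional parts yields the correct answer.
    return "Compare fractional parts: 0.9 > 0.11, so 9.9 is larger.\nAnswer: 9.9"
```

A model that answers identically under both orders passes the check; `toy_llm` above fails it on the comparison question, flagging the fabricated answer-first response.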