Large language models (LLMs) demonstrate substantial capabilities in solving math problems. However, they tend to hallucinate when given questions that contain unreasonable errors. In this paper, we study the behavior of LLMs when faced with unreasonable math problems and further explore their potential to address them. We construct the Unreasonable Math Problem (UMP) benchmark to examine the error-detection ability of LLMs. Experiments show that LLMs can detect unreasonable errors but still fail to generate non-hallucinatory content. To improve their ability to detect and correct errors, we further design a strategic prompt template called Critical Calculation and Conclusion (CCC). With CCC, LLMs can better self-evaluate and detect unreasonable errors in math questions, making them more reliable and safer in practical application scenarios.