Large language models (LLMs) demonstrate substantial capabilities in solving math problems. However, they tend to produce hallucinations when given questions containing unreasonable errors. In this paper, we study the behavior of LLMs when faced with unreasonable math problems and further explore their potential to address these problems. We construct the Unreasonable Math Problem (UMP) benchmark to examine the error detection ability of LLMs. Experiments show that LLMs are able to detect unreasonable errors, but still fail to generate non-hallucinatory content. To improve their ability to detect and correct such errors, we further design a strategic prompt template called Critical Calculation and Conclusion (CCC). With CCC, LLMs can better self-evaluate and detect unreasonable errors in math questions, making them more reliable and safer in practical application scenarios.
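To make the idea concrete, below is a minimal sketch of what a CCC-style prompt wrapper might look like. The exact wording used by CCC is not reproduced here, so the instructions in the template, the `build_ccc_prompt` helper, and the example question are illustrative assumptions rather than the method itself.

```python
# Illustrative sketch only: the precise CCC prompt wording is an assumption,
# not the template used in the paper.

def build_ccc_prompt(question: str) -> str:
    """Wrap a math question in a CCC-style prompt that asks the model to
    critically check the premises before calculating and concluding."""
    return (
        "You are a careful math tutor.\n"
        "Step 1 (Critical): examine whether the question's conditions are "
        "reasonable and mutually consistent.\n"
        "Step 2 (Calculation): if the question is reasonable, solve it step by step.\n"
        "Step 3 (Conclusion): if any condition is unreasonable, point out the "
        "error instead of fabricating a numeric answer.\n\n"
        f"Question: {question}\n"
    )

if __name__ == "__main__":
    # A toy unreasonable problem: the given numbers contradict each other.
    question = ("A class has 20 students, and 25 of them passed the exam. "
                "What is the pass rate?")
    print(build_ccc_prompt(question))
```

The prompt produced by this sketch would then be sent to the LLM in place of the raw question, so that premise checking happens before any calculation.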