Large language models (LLMs) have recently shown impressive performance on tasks involving reasoning, sparking a lively debate about whether these models possess reasoning capabilities similar to those of humans. Despite these successes, however, the depth of LLMs' reasoning abilities remains uncertain. This uncertainty partly stems from the predominant focus on task performance, measured through shallow accuracy metrics, rather than a thorough investigation of the models' reasoning behavior. This paper seeks to address this gap by providing a comprehensive review of studies that go beyond task accuracy, offering deeper insights into the models' reasoning processes. Furthermore, we survey prevalent methodologies for evaluating the reasoning behavior of LLMs, emphasizing current trends and efforts towards more nuanced reasoning analyses. Our review suggests that LLMs tend to rely on surface-level patterns and correlations in their training data rather than on genuine reasoning abilities. Additionally, we identify the need for further research that delineates the key differences between human reasoning and LLM-based reasoning. Through this survey, we aim to shed light on the complex reasoning processes within LLMs.