Causal reasoning (CR) is a crucial aspect of intelligence, essential for problem-solving, decision-making, and understanding the world. While large language models (LLMs) can generate rationales for their outputs, their ability to reliably perform causal reasoning remains uncertain, and they often fall short on tasks requiring a deep understanding of causality. In this survey, we provide a comprehensive review of research aimed at enhancing LLMs for causal reasoning. We categorize existing methods by the role of the LLM: either as a reasoning engine or as a helper that supplies knowledge or data to traditional CR methods, and we discuss the methodologies in each category in detail. We then evaluate the performance of LLMs on a range of causal reasoning tasks, presenting key findings and in-depth analysis. Finally, we distill insights from current studies and highlight promising directions for future research. We aim for this work to serve as a comprehensive resource, fostering further advancements in causal reasoning with LLMs. Resources are available at https://github.com/chendl02/Awesome-LLM-causal-reasoning.