Numerous benchmarks aim to evaluate the capabilities of Large Language Models (LLMs) for causal inference and reasoning. However, many of them can likely be solved by retrieving domain knowledge, which calls into question whether they achieve their purpose. In this review, we present a comprehensive overview of LLM benchmarks for causality. We highlight how recent benchmarks move towards a more thorough definition of causal reasoning by incorporating interventional or counterfactual reasoning. We derive a set of criteria that a useful benchmark, or set of benchmarks, should aim to satisfy. We hope this work will pave the way towards a general framework for assessing causal understanding in LLMs and for the design of novel benchmarks.