This paper introduces MalAlgoQA, a novel dataset designed to evaluate the counterfactual reasoning capabilities of Large Language Models (LLMs) through a pedagogical lens. The dataset comprises mathematics and reading comprehension questions, each accompanied by four answer choices and their corresponding rationales. We focus on the rationales behind incorrect answers, termed "malgorithms", which highlight the flawed reasoning steps that lead to incorrect answers and offer valuable insight into erroneous thought processes. We also propose the Malgorithm Identification task, in which LLMs are assessed on their ability to identify the corresponding malgorithm for a given incorrect answer choice. To evaluate model performance, we introduce two metrics: Algorithm Identification Accuracy (AIA) for identifying the rationale behind a correct answer, and Malgorithm Identification Accuracy (MIA) for identifying the rationale behind an incorrect answer. The task is challenging: state-of-the-art LLMs exhibit significant drops in MIA compared to AIA. Moreover, we find that chain-of-thought prompting not only fails to consistently improve MIA but can even underperform simple prompting. These findings have significant implications for the development of more cognitively inspired LLMs with stronger counterfactual reasoning abilities, particularly from a pedagogical perspective, where understanding and rectifying student misconceptions are crucial.
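For concreteness, both metrics reduce to simple accuracies over rationale-identification predictions, split by whether the answer choice is correct. The sketch below illustrates this under assumed data-format and variable names (`predicted`, `gold`, and the example records are ours for illustration, not the paper's evaluation code):

```python
# A minimal sketch of the AIA/MIA metrics described above.
# Field names and toy data are assumptions, not the authors' code.

def identification_accuracy(examples):
    """Fraction of examples where the model selected the gold rationale."""
    hits = sum(ex["predicted"] == ex["gold"] for ex in examples)
    return hits / len(examples)

# Toy predictions: each record pairs the LLM's chosen rationale index with
# the annotated one, split by whether the answer choice was correct.
correct_choice = [{"predicted": 0, "gold": 0}, {"predicted": 2, "gold": 1}]
incorrect_choice = [{"predicted": 3, "gold": 3}, {"predicted": 1, "gold": 2}]

aia = identification_accuracy(correct_choice)    # Algorithm Identification Accuracy
mia = identification_accuracy(incorrect_choice)  # Malgorithm Identification Accuracy
print(f"AIA={aia:.2f}, MIA={mia:.2f}")
```

The reported gap between the two numbers (MIA well below AIA) is what makes the Malgorithm Identification task a useful probe of counterfactual reasoning.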