Studies have underscored how, despite recent breakthroughs and the swift advance of AI research, even state-of-the-art Large Language Models (LLMs) continue to struggle with logical and mathematical reasoning. The results suggest that LLMs still operate as (highly advanced) data pattern identifiers, scoring poorly when asked to generalise to reasoning problems they have never seen before or that differ substantially from the samples in their training data. To address this compelling concern, this paper makes use of the notion of critical questions from the literature on argumentation theory, focusing in particular on Toulmin's model of argumentation. We show that employing these critical questions can improve the reasoning capabilities of LLMs: by probing the rationale behind a model's reasoning process, the model can assess whether a logical mistake is occurring and correct it before providing the final reply to the user prompt. The underlying idea is drawn from the gold standard of any valid argumentative procedure: a conclusion is valid if it is entailed by accepted premises. Or, to paraphrase this Aristotelian principle in a real-world approximation characterised by incomplete information and presumptive logic, a conclusion is valid unless proven otherwise. This approach successfully steers the model's output through a reasoning pipeline, yielding better performance than both the baseline and its Chain-of-Thought (CoT) implementation. To this end, we provide an extensive evaluation of the proposed approach on the MT-Bench Reasoning and Math tasks across a range of LLMs.
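The probing step described above can be sketched as a simple draft-probe-revise loop. This is a minimal illustrative sketch, not the paper's actual pipeline: the critical questions, prompt wording, and the `toy_llm` stand-in below are all hypothetical, introduced only to make the control flow concrete.

```python
# Hypothetical sketch of a critical-question reasoning pipeline.
# The question list and prompts are illustrative, not the paper's.
CRITICAL_QUESTIONS = [
    "Are the premises of the argument actually accepted?",
    "Is the conclusion entailed by the stated premises?",
    "Is there an exception that would defeat the inference?",
]

def critical_question_pipeline(llm, prompt):
    """Draft an answer, probe it with critical questions,
    and revise it before returning the final reply."""
    draft = llm(f"Answer step by step: {prompt}")
    for question in CRITICAL_QUESTIONS:
        verdict = llm(f"Answer: {draft}\n"
                      f"Critical question: {question}\n"
                      f"Reply PASS or explain the flaw.")
        if not verdict.strip().startswith("PASS"):
            # A critical question exposed a flaw: revise the draft.
            draft = llm(f"Revise this answer to fix the flaw "
                        f"({verdict}): {draft}")
    return draft

def toy_llm(text):
    """Deterministic stand-in for a real LLM call, so the sketch runs.
    It drafts a wrong answer, flags it, and 'fixes' it on revision."""
    if text.startswith("Revise"):
        return "17 - 9 = 8"
    if "Critical question" in text:
        return "PASS" if "8" in text else "FAIL: arithmetic error"
    return "17 - 9 = 7"  # deliberately flawed first draft

print(critical_question_pipeline(toy_llm, "What is 17 - 9?"))
# → 17 - 9 = 8
```

In a real setting, `llm` would wrap an actual model call; the point of the sketch is only that each critical question acts as a gate the draft must pass before it is returned to the user.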