Deductive reasoning is a crucial logical capability that helps us solve complex problems based on existing knowledge. Even when augmented with Chain-of-Thought prompting, Large Language Models (LLMs) may fail to follow correct reasoning paths. How to enhance the deductive reasoning abilities of LLMs, and leverage their extensive built-in knowledge for various reasoning tasks, remains an open question. To mimic the human deductive reasoning paradigm, we propose a multi-stage Syllogistic-Reasoning Framework of Thought (SR-FoT) that enables LLMs to perform syllogistic deductive reasoning on complex knowledge-based reasoning tasks. SR-FoT begins by interpreting the question, then uses the interpretation together with the original question to propose a suitable major premise. It next generates and answers minor-premise questions in two stages to obtain the matching minor premises. Finally, it guides the LLM to use the previously generated major and minor premises to perform syllogistic deductive reasoning and derive the answer to the original question. Extensive experiments on knowledge-based reasoning tasks demonstrate the effectiveness and advantages of SR-FoT.
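As a rough illustration of the multi-stage pipeline described above, the control flow can be sketched in Python. The `call_llm` function, the stage names, and all prompt wording are placeholders of our own invention, not the paper's actual prompts; here the LLM is stubbed with canned responses so the stages can be traced offline.

```python
# Hypothetical sketch of the SR-FoT stages: interpret the question,
# propose a major premise, generate and answer a minor-premise question,
# then deduce the answer syllogistically.

def call_llm(prompt: str) -> str:
    """Placeholder LLM call; in practice this would hit a real chat API.

    The stub dispatches on the stage tag before the first colon and
    returns a canned response for a worked example.
    """
    stage = prompt.split(":", 1)[0]
    canned = {
        "interpret": "The question asks whether a whale is warm-blooded.",
        "major": "All mammals are warm-blooded.",
        "minor_q": "Is a whale a mammal?",
        "minor_a": "A whale is a mammal.",
        "deduce": "Therefore, a whale is warm-blooded.",
    }
    return canned.get(stage, "")

def sr_fot(question: str) -> str:
    # Stage 1: interpret the original question.
    interpretation = call_llm(f"interpret: {question}")
    # Stage 2: propose a major premise from the question and interpretation.
    major = call_llm(f"major: {question} | {interpretation}")
    # Stage 3a: generate a minor-premise question matching the major premise.
    minor_q = call_llm(f"minor_q: {major}")
    # Stage 3b: answer it to obtain the minor premise.
    minor = call_llm(f"minor_a: {minor_q}")
    # Stage 4: syllogistic deduction from both premises.
    return call_llm(f"deduce: major '{major}' minor '{minor}'")

answer = sr_fot("Is a whale warm-blooded?")
print(answer)
```

The stub makes the two-premise structure explicit: the major premise supplies a general rule, the minor premise instantiates it for the question at hand, and the final stage combines them.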