Recently, increasing attention has been drawn to improving the ability of Large Language Models (LLMs) to perform complex reasoning. However, previous methods, such as Chain-of-Thought and Self-Consistency, mainly follow Direct Reasoning (DR) frameworks and thus struggle with the many real-world tasks that can hardly be solved via DR. To strengthen the reasoning power of LLMs, this paper therefore proposes a novel Indirect Reasoning (IR) method that employs the logic of contrapositives and contradictions to tackle IR tasks such as factual reasoning and mathematical proof. Specifically, our methodology comprises two steps. First, we leverage the logical equivalence of the contrapositive to augment the data and rules, enhancing their comprehensibility to LLMs. Second, we design a set of prompt templates that trigger LLMs to conduct IR based on proof by contradiction, which is logically equivalent to the original DR process. Our IR method is simple yet effective and can be straightforwardly integrated with existing DR methods to further boost the reasoning abilities of LLMs. Experimental results on popular LLMs, such as GPT-3.5-turbo and Gemini-pro, show that, compared with traditional DR methods, our IR method enhances the overall accuracy of factual reasoning by 27.33% and of mathematical proof by 31.43%. Moreover, methods combining IR and DR significantly outperform those using IR or DR alone, further demonstrating the effectiveness of our strategy.
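The two steps above can be illustrated with a minimal sketch. The function and template below are hypothetical, constructed for illustration only; the paper's actual prompt templates and augmentation rules are not reproduced here.

```python
# Illustrative sketch of the two-step IR methodology described above.
# All names (contrapositive, indirect_reasoning_prompt) are hypothetical.

def contrapositive(premise: str, conclusion: str) -> str:
    """Step 1: augment a rule 'If P then Q' with its logically
    equivalent contrapositive 'If not Q then not P'."""
    return (f"If it is not the case that {conclusion}, "
            f"then it is not the case that {premise}.")

def indirect_reasoning_prompt(facts: list[str], claim: str) -> str:
    """Step 2: a proof-by-contradiction prompt template. The model is
    asked to assume the negation of the claim and derive a contradiction."""
    fact_lines = "\n".join(f"- {f}" for f in facts)
    return (
        f"Facts:\n{fact_lines}\n\n"
        f"Claim: {claim}\n"
        "Assume the claim is false. Reason step by step from the facts "
        "until you reach a contradiction, then conclude the claim is true."
    )

# Usage: augment the rule set, then build the IR prompt for an LLM.
rule = contrapositive("it rains", "the ground is wet")
prompt = indirect_reasoning_prompt(
    ["If it rains, the ground is wet.", rule, "The ground is not wet."],
    "It did not rain.",
)
```

Such a prompt would then be sent to the LLM, optionally alongside a standard DR prompt when combining the two strategies.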