Large language models (LLMs) have demonstrated impressive capabilities across a wide range of reasoning tasks, including logical and mathematical problem solving. While prompt-based methods such as Chain-of-Thought (CoT) can enhance LLM reasoning to some extent, they often suffer from a lack of faithfulness: the derived conclusion may not follow from the generated reasoning chain. To address this issue, researchers have explored neuro-symbolic approaches to bolster the logical reasoning capabilities of LLMs. However, existing neuro-symbolic methods still suffer from information loss when translating natural-language problems into symbolic form. To overcome these limitations, we introduce Iterative Feedback-Driven Neuro-Symbolic (IFDNS), a novel prompt-based method that employs a multi-round feedback mechanism to address LLM limitations in handling complex logical relationships. During the logic-extraction phase, IFDNS uses iterative feedback to accurately extract causal statements and translate them into propositions and logical implications, effectively mitigating information loss. Furthermore, IFDNS is orthogonal to existing prompting methods and can be seamlessly combined with them. Empirical evaluations on six datasets demonstrate that IFDNS significantly improves the performance of both CoT and CoT with Self-Consistency (CoT-SC): it achieves a +9.40% accuracy gain for CoT on the LogiQA dataset and a +11.70% gain for CoT-SC on the PrOntoQA dataset.
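To make the multi-round feedback idea concrete, the following is a minimal sketch of an IFDNS-style logic-extraction loop as we read the description above. The `llm(prompt) -> str` callable, the prompt wording, the `OK` self-check sentinel, and the `max_rounds` cap are all illustrative assumptions, not the paper's exact implementation.

```python
# A minimal sketch of IFDNS-style iterative feedback for logic extraction.
# Assumptions (not from the paper): a generic `llm(prompt) -> str` callable,
# the prompt wording, the OK-sentinel verifier, and the round cap.

def extract_logic(llm, passage: str, max_rounds: int = 3) -> str:
    """Repeatedly extract propositions and implications from `passage`,
    feeding verifier complaints back until the extraction is judged faithful."""
    extraction, feedback = "", ""
    for _ in range(max_rounds):
        prompt = (
            "Extract every causal statement in the passage as propositions "
            "(P1, P2, ...) and logical implications (e.g. P1 -> P2).\n"
            f"Passage: {passage}\n"
        )
        if feedback:
            # Multi-round feedback: tell the model what the last attempt missed.
            prompt += f"Issues found in the previous attempt: {feedback}\n"
        extraction = llm(prompt)

        # Self-check round: ask for statements that were lost or distorted.
        feedback = llm(
            "List any causal statements in the passage that are missing from "
            "or misrepresented in the extraction below; reply OK if none.\n"
            f"Passage: {passage}\nExtraction: {extraction}"
        ).strip()
        if feedback == "OK":
            break  # extraction judged faithful; stop iterating
    return extraction
```

In a neuro-symbolic pipeline of the kind described, the resulting propositions and implications would presumably be handed to a symbolic reasoning step rather than answered directly by the LLM, which is what keeps the final conclusion aligned with the extracted logic.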