Chain-of-Thought (CoT) prompting has emerged as a pivotal technique for augmenting the inferential capabilities of language models on reasoning tasks. Despite its advances, CoT often struggles to guarantee the validity and informativeness of its reasoning. To address these limitations, this paper introduces the Logic Agent (LA), an agent-based framework that enhances the validity of reasoning in Large Language Models (LLMs) through strategic invocation of logic rules. Unlike conventional approaches, LA transforms LLMs into logic agents that dynamically apply propositional logic rules, initiating the reasoning process by converting natural language inputs into structured logic forms. The agent then leverages a comprehensive set of predefined functions to systematically navigate the reasoning process. This methodology promotes the structured and coherent generation of reasoning chains and significantly improves their interpretability and logical consistency. Through extensive experiments, we demonstrate that LA scales effectively across model sizes, markedly improving the precision of complex reasoning on diverse tasks.
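To make the core idea concrete, the following is a minimal illustrative sketch (not the paper's actual implementation): a toy "logic agent" converts a natural-language conditional into a structured logic form, then applies one predefined propositional rule (contraposition) to it. The parser, rule function, and sentence pattern are all simplifying assumptions for illustration.

```python
# Illustrative sketch only (assumed, not from the paper): convert a
# natural-language conditional into a structured logic form, then apply
# a propositional rule as one "predefined function" the agent can invoke.

from dataclasses import dataclass


@dataclass
class Implication:
    """Structured logic form for a conditional: antecedent -> consequent."""
    antecedent: str
    consequent: str


def parse_conditional(text: str) -> Implication:
    # Toy NL-to-logic conversion handling only "If A then B." sentences.
    body = text.strip().rstrip(".")
    if not body.lower().startswith("if ") or " then " not in body:
        raise ValueError("expected a sentence of the form 'If A then B.'")
    antecedent, consequent = body[3:].split(" then ", 1)
    return Implication(antecedent.strip(), consequent.strip())


def contrapositive(imp: Implication) -> Implication:
    # Propositional rule: (p -> q) is equivalent to (not q -> not p).
    return Implication(f"not ({imp.consequent})", f"not ({imp.antecedent})")


imp = parse_conditional("If it rains then the ground is wet.")
cp = contrapositive(imp)
print(cp.antecedent)  # not (the ground is wet)
print(cp.consequent)  # not (it rains)
```

In the full framework, rules like `contrapositive` would be members of the agent's predefined function set, selected dynamically during reasoning rather than called in a fixed order as here.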