Chain-of-Thought (CoT) prompting has emerged as a pivotal technique for augmenting the inferential capabilities of language models during reasoning tasks. Despite its advancements, CoT often struggles to guarantee the validity of its reasoning and the informativeness of its outputs. Addressing these limitations, this paper introduces the Logic Agent (LA), an agent-based framework aimed at enhancing the validity of reasoning processes in Large Language Models (LLMs) through strategic logic rule invocation. Unlike conventional approaches, LA transforms LLMs into logic agents that dynamically apply propositional logic rules, initiating the reasoning process by converting natural language inputs into structured logic forms. The logic agent leverages a comprehensive set of predefined functions to systematically navigate the reasoning process. This methodology not only promotes the structured and coherent generation of reasoning constructs but also significantly improves their interpretability and logical coherence. Through extensive experimentation, we demonstrate LA's capacity to scale effectively across various model sizes, markedly improving the precision of complex reasoning across diverse tasks.
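To make the framework concrete, the sketch below illustrates the general idea of "predefined functions that apply propositional logic rules over structured logic forms." All names here (`Implies`, `modus_ponens`, `contraposition`, `derive`) are hypothetical illustrations, not the paper's actual API; the real Logic Agent operates inside an LLM rather than as a standalone inference loop.

```python
from dataclasses import dataclass

# Hypothetical structured logic form: an implication between
# atomic propositions, as might result from parsing natural language.
@dataclass(frozen=True)
class Implies:
    antecedent: str  # e.g. "rains"
    consequent: str  # e.g. "wet"

def negate(p: str) -> str:
    """Toggle a '~' negation prefix on an atomic proposition."""
    return p[1:] if p.startswith("~") else "~" + p

def contraposition(rule: Implies) -> Implies:
    """Predefined rule: P -> Q entails ~Q -> ~P."""
    return Implies(negate(rule.consequent), negate(rule.antecedent))

def modus_ponens(rule: Implies, fact: str):
    """Predefined rule: from P -> Q and P, derive Q."""
    return rule.consequent if fact == rule.antecedent else None

def derive(facts, rules, goal) -> bool:
    """Minimal agent loop: invoke rules until the goal is proved
    or no new fact can be derived (fixpoint)."""
    facts = set(facts)
    all_rules = list(rules) + [contraposition(r) for r in rules]
    changed = True
    while changed:
        changed = False
        for rule in all_rules:
            for fact in list(facts):
                derived = modus_ponens(rule, fact)
                if derived and derived not in facts:
                    facts.add(derived)
                    changed = True
    return goal in facts
```

For example, given the parsed rule "if it rains, the ground is wet" and the observation "the ground is not wet," the loop derives "it does not rain" via contraposition followed by modus ponens, which is the kind of structured, checkable step the agent-based reasoning process aims for.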