Large Language Models (LLMs) have revolutionized natural language processing, yet they struggle with inconsistent reasoning, particularly in novel domains and complex logical sequences. This research introduces Proof of Thought, a framework that enhances the reliability and transparency of LLM outputs. Our approach bridges LLM-generated ideas with formal logic verification, employing a custom interpreter to convert LLM outputs into First-Order Logic constructs for theorem prover scrutiny. Central to our method is an intermediary JSON-based Domain-Specific Language (DSL), designed to balance precise logical structure with intuitive human concepts. This hybrid representation enables both rigorous validation and accessible human comprehension of LLM reasoning processes. Key contributions include a robust type system with sort management for enhanced logical integrity, explicit representation of rules that clearly distinguishes factual from inferential knowledge, and a flexible architecture that is easily extended to various domain-specific applications. We demonstrate Proof of Thought's effectiveness through benchmarking on StrategyQA and a novel multimodal reasoning task, showing improved performance in open-ended scenarios. By providing verifiable and interpretable results, our technique addresses critical needs for AI system accountability and lays a foundation for human-in-the-loop oversight in high-stakes domains.
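To make the intermediary representation concrete, the sketch below shows what one program in such a JSON-based DSL might look like before it is lowered to First-Order Logic for the theorem prover. The field names (sorts, facts, rules, query) and the example predicates are illustrative assumptions for exposition, not the paper's actual schema.

    {
      "sorts": ["Person", "Country"],
      "facts": [
        {"predicate": "BornIn", "args": ["Socrates", "Greece"]}
      ],
      "rules": [
        {"if":   {"predicate": "BornIn",  "args": ["?p", "Greece"]},
         "then": {"predicate": "IsGreek", "args": ["?p"]}}
      ],
      "query": {"predicate": "IsGreek", "args": ["Socrates"]}
    }

In a representation of this shape, the sorts section supplies the type system, facts and rules keep asserted knowledge separate from inferential knowledge, and the query is the claim the theorem prover is asked to verify.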