The emergence of Large Language Models (LLMs) has brought promising progress in solving logical reasoning tasks. Several recent approaches propose to change the role of the LLM from reasoner to translator: the LLM converts natural language statements into symbolic representations, which are then passed to external symbolic solvers to resolve. This paradigm has established the current state of the art in logical reasoning (specifically, deductive reasoning). However, it remains unclear whether the variance in performance across these approaches stems from the methodologies employed or from the specific symbolic solvers used. There is a lack of consistent comparison between symbolic solvers and of how they influence the overall reported performance. This matters because each symbolic solver has its own input symbolic language, presenting varying degrees of difficulty for the translation process. To address this gap, we perform experiments on three deductive reasoning benchmarks with LLMs augmented with widely used symbolic solvers: Z3, Pyke, and Prover9. The tool-executable rates of the symbolic translations generated by different LLMs vary by nearly 50%, highlighting a significant performance difference rooted in the basic choice of tool. The near-linear correlation between the executable rate of translations and the accuracy of Prover9's outcomes indicates a strong alignment between an LLM's ability to translate into Prover9's symbolic language and the correctness of those translations.
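To make the solver-augmentation paradigm concrete, the following is a minimal sketch (not the paper's actual pipeline, and far simpler than Z3, Pyke, or Prover9): once an LLM has translated natural language such as "Socrates is a man; all men are mortal" into grounded symbolic facts and Horn rules, a solver's job is to derive the entailed conclusions. The predicate names and the `forward_chain` helper below are illustrative assumptions.

```python
# Sketch of the deductive step a symbolic solver performs after an LLM
# has translated natural language into symbolic form. This is a toy
# forward-chaining engine over grounded Horn clauses; real solvers like
# Prover9 handle full first-order logic with quantified variables.

def forward_chain(facts, rules):
    """Derive every fact entailed by Horn rules of the form
    (premises, conclusion), starting from an initial fact set."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Fire a rule when all its premises are already derived.
            if conclusion not in derived and all(p in derived for p in premises):
                derived.add(conclusion)
                changed = True
    return derived

# Hypothetical LLM translation of:
#   "Socrates is a man. All men are mortal. Mortals fear death."
facts = {"Man(socrates)"}
rules = [
    (("Man(socrates)",), "Mortal(socrates)"),
    (("Mortal(socrates)",), "FearsDeath(socrates)"),
]

entailed = forward_chain(facts, rules)
print("Mortal(socrates)" in entailed)      # True
print("FearsDeath(socrates)" in entailed)  # True
```

Note that the executable-rate issue studied in the paper arises one step earlier than this sketch: if the LLM's translation is not syntactically valid in the solver's input language, the solver never gets to run the deduction at all.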