Natural language explanations have become a proxy for evaluating explainable and multi-step Natural Language Inference (NLI) models. However, assessing the validity of explanations for NLI is challenging, as it typically involves the crowd-sourcing of apposite datasets, a process that is time-consuming and prone to logical errors. To address these limitations, this paper investigates the verification and refinement of natural language explanations through the integration of Large Language Models (LLMs) and Theorem Provers (TPs). Specifically, we present a neuro-symbolic framework, named Explanation-Refiner, that augments a TP with LLMs to generate and formalise explanatory sentences and suggest potential inference strategies for NLI. In turn, the TP is employed to provide formal guarantees on the logical validity of the explanations and to generate feedback for subsequent improvements. We demonstrate how Explanation-Refiner can be jointly used to evaluate the explanatory reasoning, autoformalisation, and error correction mechanisms of state-of-the-art LLMs, as well as to automatically enhance the quality of human-annotated explanations of variable complexity across different domains.
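To make the verify-and-refine interaction between the LLM and the theorem prover concrete, the following is a minimal Python sketch of such an iterative loop. All function names (generate_formalisation, prove, refine_explanation) and the ProofResult structure are hypothetical placeholders introduced here for illustration, not the actual Explanation-Refiner API; the LLM and TP calls are stubbed out.

```python
# Illustrative sketch of an LLM + theorem-prover refinement loop.
# The three callables below stand in for (1) LLM autoformalisation,
# (2) theorem-prover verification, and (3) LLM-driven refinement.
from dataclasses import dataclass


@dataclass
class ProofResult:
    valid: bool      # did the prover verify that the premises entail the hypothesis?
    feedback: str    # prover output used to guide the next refinement step


def generate_formalisation(premise: str, hypothesis: str, explanation: list[str]) -> str:
    """LLM step: translate the explanatory sentences into a formal theory."""
    raise NotImplementedError("placeholder for an LLM autoformalisation call")


def prove(theory: str) -> ProofResult:
    """TP step: attempt to prove the formal theory and collect feedback."""
    raise NotImplementedError("placeholder for a theorem-prover call")


def refine_explanation(explanation: list[str], feedback: str) -> list[str]:
    """LLM step: revise the explanation using the prover's feedback."""
    raise NotImplementedError("placeholder for an LLM refinement call")


def verify_and_refine(premise: str, hypothesis: str, explanation: list[str],
                      max_iterations: int = 5) -> tuple[list[str], bool]:
    """Iteratively formalise, verify, and refine an explanation."""
    for _ in range(max_iterations):
        theory = generate_formalisation(premise, hypothesis, explanation)
        result = prove(theory)
        if result.valid:
            return explanation, True   # explanation is logically valid
        explanation = refine_explanation(explanation, result.feedback)
    return explanation, False          # not resolved within the iteration budget
```

The key design point the sketch illustrates is the division of labour: the LLM proposes and repairs explanations, while the prover supplies the formal validity signal and the feedback that drives each refinement iteration.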