Recent fact verification systems built on natural logic have improved explainability by aligning claims with evidence through set-theoretic operators, yielding faithful justifications. Despite these advances, such systems typically rely on large amounts of training data annotated with natural logic. To address this limitation, we propose a zero-shot method that leverages the generalization capabilities of instruction-tuned large language models. To comprehensively assess the zero-shot capabilities of our method and other fact verification systems, we evaluate all models on both artificial and real-world claims, including multilingual datasets. We compare our method against other fact verification systems in two setups. First, in the zero-shot generalization setup, we demonstrate that our approach outperforms other systems not specifically trained on natural logic data, achieving an average accuracy improvement of 8.96 points over the best-performing baseline. Second, in the zero-shot transfer setup, we show that current systems trained on natural logic data generalize poorly to other domains, and that our method outperforms these systems across all datasets with real-world claims.