In this paper, we propose ZeFaV, a zero-shot fact-checking verification framework that enhances the performance of large language models on the fact verification task. ZeFaV leverages the in-context learning ability of large language models to extract the relations among the entities within a claim, reorganizes the information from the evidence into a relationally logical form, and combines this information with the original evidence to form the context from which our fact-checking model provides verdicts for input claims. We conducted empirical experiments on two multi-hop fact-checking datasets, HoVer and FEVEROUS, and achieved results comparable to other state-of-the-art fact verification methods.