Table-based Fact Verification (TFV) aims to identify the entailment relation between statements and structured tables. Existing TFV methods based on small-scale models suffer from insufficient labeled data and weak zero-shot ability. Recently, Large Language Models (LLMs) have attracted considerable research attention: they have shown powerful zero-shot and in-context learning abilities on many NLP tasks, but their potential on TFV remains unexplored. In this work, we conduct a preliminary study of whether LLMs are table-based fact-checkers. Specifically, we design diverse prompts to explore how in-context learning helps LLMs on TFV, i.e., their zero-shot and few-shot TFV capability. In addition, we carefully design and construct TFV instructions to study the performance gain brought by instruction tuning of LLMs. Experimental results demonstrate that LLMs achieve acceptable zero-shot and few-shot TFV results with prompt engineering, while instruction tuning stimulates TFV capability significantly. We also report several valuable findings on the format of zero-shot prompts and the number of in-context examples. Finally, we analyze possible directions for improving the accuracy of TFV with LLMs, which we hope will benefit further research on table reasoning.