The increasing prevalence of online misinformation has heightened the demand for automated fact-checking solutions. Large Language Models (LLMs) have emerged as potential tools for assisting in this task, but their effectiveness remains uncertain. This study evaluates the fact-checking capabilities of various open-source LLMs, focusing on their ability to assess claims given different levels of contextual information. We conduct three key experiments: (1) evaluating whether LLMs can identify the semantic relationship between a claim and a fact-checking article, (2) assessing models' accuracy in verifying claims when given a related fact-checking article, and (3) testing LLMs' fact-checking abilities when leveraging data from external knowledge sources such as Google and Wikipedia. Our results indicate that LLMs perform well at identifying claim-article connections and verifying previously fact-checked stories, but struggle to confirm factual news, where they are outperformed by traditional fine-tuned models such as RoBERTa. Additionally, introducing external knowledge does not significantly enhance LLMs' performance, calling for more tailored approaches. Our findings highlight both the potential and the limitations of LLMs in automated fact-checking, emphasizing the need for further refinement before they can reliably replace human fact-checkers.