The dissemination of false information across online platforms poses a serious societal challenge, necessitating robust measures for information verification. While manual fact-checking efforts remain instrumental, the growing volume of false information calls for automated methods. Large language models (LLMs) offer promising opportunities to assist fact-checkers, leveraging LLMs' extensive knowledge and strong reasoning capabilities. In this survey paper, we investigate the use of generative LLMs in fact-checking, illustrating the various approaches that have been employed and the techniques for prompting or fine-tuning LLMs. By providing an overview of existing approaches, this survey aims to improve the understanding of how LLMs can be utilized in fact-checking and to facilitate further progress in their involvement in this process.