The dissemination of false information on online platforms presents a serious societal challenge. While manual fact-checking remains crucial, Large Language Models (LLMs) offer promising opportunities to support fact-checkers with their vast knowledge and advanced reasoning capabilities. This survey explores the application of generative LLMs in fact-checking, highlighting various approaches and techniques for prompting or fine-tuning these models. By providing an overview of existing methods and their limitations, the survey aims to enhance the understanding of how LLMs can be used in fact-checking and to facilitate further progress in their integration into the fact-checking process.