This paper describes our $3^{rd}$-place submission to the AVeriTeC shared task, in which we address the challenge of fact-checking with evidence retrieved in the wild using a simple Retrieval-Augmented Generation (RAG) scheme designed for the task, leveraging the predictive power of Large Language Models. We release our codebase and explain its two modules, the Retriever and the Evidence & Label generator, in detail, justifying features such as MMR reranking and Likert-scale confidence estimation. We evaluate our solution on the AVeriTeC dev and test sets and interpret the results, selecting GPT-4o as the most appropriate model for our pipeline at the time of publication, with Llama 3.1 70B as a promising open-source alternative. An empirical error analysis shows that faults in our predictions often coincide with noise in the data or with ambiguous fact-checks, motivating further research and data augmentation.
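The MMR reranking mentioned above can be illustrated with a minimal sketch. The function below is a generic, pure-Python implementation of Maximal Marginal Relevance over embedding vectors, not the authors' actual Retriever code: it greedily selects documents that score high on query relevance while penalizing similarity to documents already chosen (the `lambda_` trade-off parameter and the function names are illustrative assumptions).

```python
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two dense vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def mmr_rerank(query, docs, lambda_=0.5, k=5):
    """Maximal Marginal Relevance: greedily pick up to k documents that are
    relevant to the query but dissimilar to already-selected documents."""
    relevance = [cosine(query, d) for d in docs]
    selected, remaining = [], list(range(len(docs)))
    while remaining and len(selected) < k:
        def mmr_score(i):
            # Redundancy = max similarity to anything already selected.
            redundancy = max((cosine(docs[i], docs[j]) for j in selected),
                             default=0.0)
            return lambda_ * relevance[i] - (1 - lambda_) * redundancy
        best = max(remaining, key=mmr_score)
        selected.append(best)
        remaining.remove(best)
    return selected

# With a low lambda_, diversity dominates: the near-duplicate of the top
# document is skipped in favor of an orthogonal one.
query = [1.0, 0.0]
docs = [[1.0, 0.0], [0.99, 0.1], [0.0, 1.0]]
print(mmr_rerank(query, docs, lambda_=0.3, k=2))  # → [0, 2]
```

Setting `lambda_` close to 1 recovers plain relevance ranking; lower values trade relevance for coverage of distinct evidence, which is the usual motivation for MMR in retrieval pipelines.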