The rapid proliferation of misinformation across online platforms underscores the urgent need for robust, up-to-date, explainable, and multilingual fact-checking resources. However, existing datasets are limited in scope, often lacking multimodal evidence, structured annotations, and detailed links between claims, evidence, and verdicts. This paper introduces a comprehensive data collection and processing pipeline that constructs multimodal fact-checking datasets in French and German by aggregating ClaimReview feeds, scraping full debunking articles, normalizing heterogeneous claim verdicts, and enriching the data with structured metadata and aligned visual content. We use state-of-the-art large language models (LLMs) and multimodal LLMs for (i) evidence extraction under predefined evidence categories and (ii) justification generation that links evidence to verdicts. Evaluation with G-Eval and human assessment demonstrates that our pipeline enables fine-grained comparison of fact-checking practices across organizations and media markets, facilitates the development of more interpretable, evidence-grounded fact-checking models, and lays the groundwork for future research on multilingual, multimodal misinformation verification.
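The verdict-normalization step above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the canonical label set and the French/German rating strings in the mapping table are assumptions chosen for the example, and real ClaimReview feeds expose the raw rating in `reviewRating.alternateName`.

```python
import unicodedata

# Illustrative mapping from raw ClaimReview ratings (French/German examples,
# assumed for this sketch) to a small canonical verdict set.
VERDICT_MAP = {
    "faux": "false",
    "falsch": "false",
    "vrai": "true",
    "richtig": "true",
    "trompeur": "misleading",
    "irreführend": "misleading",
}

def normalize_verdict(raw: str) -> str:
    """Unicode-normalize, trim, and lowercase a raw rating string,
    then map it to a canonical verdict; unknown ratings fall back to 'other'."""
    key = unicodedata.normalize("NFKC", raw).strip().lower()
    return VERDICT_MAP.get(key, "other")
```

In practice such a table would be built per publisher, since different fact-checking organizations use divergent rating vocabularies for the same underlying verdict.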