While LLMs exhibit remarkable fluency, their utility is often compromised by factual hallucinations and a lack of traceable provenance. Existing grounding resources mitigate this but typically enforce a dichotomy: they offer either structured knowledge without textual context (e.g., knowledge bases) or grounded text with limited scale and linguistic coverage. To bridge this gap, we introduce FactNet, a massive, open-source resource that unifies 1.7 billion atomic assertions with 3.01 billion auditable evidence pointers, derived exclusively from 316 Wikipedia editions. Unlike recent synthetic approaches, FactNet employs a strictly deterministic construction pipeline, ensuring that every evidence unit is recoverable with byte-level precision. Extensive auditing confirms a grounding precision of 92.1%, even in long-tail languages. We further establish FactNet-Bench, a comprehensive evaluation suite covering Knowledge Graph Completion, Question Answering, and Fact Checking. FactNet provides the community with a foundational, reproducible resource for training and evaluating trustworthy, verifiable multilingual systems.
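The byte-level recoverability claim is the technical crux of the pipeline. As a minimal sketch, the snippet below illustrates one way such an evidence pointer could be resolved against article text; the pointer schema (edition, page_id, byte offsets) is a hypothetical assumption for illustration, since the abstract does not specify FactNet's actual format.

```python
# Hypothetical sketch of byte-level evidence recovery; the pointer schema
# below is an assumption, not FactNet's documented format.
from dataclasses import dataclass


@dataclass(frozen=True)
class EvidencePointer:
    edition: str      # Wikipedia language edition, e.g. "en"
    page_id: int      # stable page identifier within that edition
    byte_start: int   # inclusive byte offset into the UTF-8 article text
    byte_end: int     # exclusive byte offset


def resolve(pointer: EvidencePointer, article_text: str) -> str:
    """Recover an evidence span by slicing the article's UTF-8 bytes.

    Slicing bytes (rather than Unicode code points) is what would make
    recovery deterministic across tools: two implementations that agree
    on the dump snapshot extract byte-identical spans.
    """
    raw = article_text.encode("utf-8")
    span = raw[pointer.byte_start:pointer.byte_end]
    # errors="strict" surfaces pointers that split a multi-byte character,
    # which would indicate a corrupted or misaligned pointer.
    return span.decode("utf-8", errors="strict")


if __name__ == "__main__":
    text = "FactNet grounds assertions in Wikipedia text."
    ptr = EvidencePointer(edition="en", page_id=12345, byte_start=0, byte_end=7)
    assert resolve(ptr, text) == "FactNet"
```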