Despite substantial progress in fact-verification benchmarks, claims grounded in large-scale structured data remain underexplored. In this work, we introduce ClaimDB, the first fact-verification benchmark in which the evidence for a claim must be composed from millions of records across multiple tables. ClaimDB consists of 80 unique real-life databases covering a wide range of domains, from governance and healthcare to media, education, and the natural sciences. At this scale, verification approaches that rely on "reading" the evidence break down, forcing a timely shift toward reasoning with executable programs. We conduct extensive experiments with 30 state-of-the-art proprietary and open-source (below 70B parameters) LLMs and find that none exceeds 83% accuracy, with more than half falling below 55%. Our analysis also reveals that both closed- and open-source models struggle with abstention -- the ability to recognize that the available evidence is insufficient to decide -- raising doubts about their reliability in high-stakes data analysis. We release the benchmark, code, and the LLM leaderboard at https://claimdb.github.io.
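To make the shift toward program-based verification concrete, the sketch below shows one plausible pipeline: a model-generated query is executed against the database and its result, not the raw rows, decides the label. This is a minimal illustration under assumed table names, column names, and label set, not the benchmark's actual evaluation pipeline.

```python
# Minimal sketch of program-based claim verification: rather than "reading"
# millions of rows, an executable query is run and its result decides the
# label. Table/column names and the label set are illustrative assumptions.
import sqlite3

def verify(db_path: str, claim_sql: str, expected: int) -> str:
    """Run a (hypothetically model-generated) COUNT query and map its
    result to a verdict, abstaining when there is no usable evidence."""
    con = sqlite3.connect(db_path)
    try:
        row = con.execute(claim_sql).fetchone()
    except sqlite3.Error:
        return "NOT ENOUGH INFO"  # abstain if the program itself fails
    finally:
        con.close()
    if row is None or row[0] is None:
        return "NOT ENOUGH INFO"  # abstain: no evidence either way
    return "SUPPORTED" if row[0] == expected else "REFUTED"

# Example: claim "exactly 3 hospitals in the dataset opened after 2010"
# print(verify("health.db",
#              "SELECT COUNT(*) FROM hospitals WHERE opened_year > 2010", 3))
```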