Recent work integrating Large Language Models (LLMs) has led to significant improvements in the Knowledge Base Question Answering (KBQA) task. However, we posit that existing KBQA datasets, which either contain simple questions, use synthetically generated logical forms, or are based on small knowledge base (KB) schemas, do not capture the true complexity of KBQA tasks. To address this, we introduce the SPINACH dataset, an expert-annotated KBQA dataset of 320 decontextualized question-SPARQL pairs collected from discussions on Wikidata's "Request a Query" forum. Much more complex than existing datasets, SPINACH calls for strong KBQA systems that, rather than relying on training data to learn the KB schema, can dynamically explore large and often incomplete schemas and reason about them. Along with the dataset, we introduce the SPINACH agent, a new KBQA approach that mimics how a human expert writes SPARQL queries for such challenging questions. Experiments on existing datasets demonstrate the SPINACH agent's capability in KBQA: it achieves a new state of the art on the QALD-7, QALD-9-Plus, and QALD-10 datasets, improving F1 by 30.1%, 27.0%, and 10.0%, respectively, and comes within 1.6% of the fine-tuned LLaMA SOTA model on WikiWebQuestions. On our new SPINACH dataset, the SPINACH agent outperforms all baselines, including the best GPT-4-based KBQA agent, by 38.1% in F1.