Complex logical query answering (CLQA) in knowledge graphs (KGs) goes beyond simple KG completion and aims at answering compositional queries composed of multiple projections and logical operations. Existing CLQA methods that learn parameters bound to certain entity or relation vocabularies can only be applied to the graph they are trained on, which requires substantial training time before deployment on a new graph. Here we present UltraQuery, the first foundation model for inductive reasoning that can answer logical queries zero-shot on any KG. The core idea of UltraQuery is to derive both projections and logical operations as vocabulary-independent functions that generalize to new entities and relations in any KG. With the projection operation initialized from a pre-trained inductive KG reasoning model, UltraQuery can solve CLQA on any KG after fine-tuning on a single dataset. In experiments on 23 datasets, UltraQuery in the zero-shot inference mode shows competitive or better query answering performance than the best available baselines and sets a new state of the art on 15 of them.
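To illustrate what a vocabulary-independent logical operation can look like, the sketch below applies non-parametric fuzzy-logic operators (product t-norm for conjunction, probabilistic sum for disjunction, standard complement for negation) to per-entity score vectors in [0, 1]. This is a common design in fuzzy-logic CLQA systems and is shown here as an assumption for illustration, not as the exact operators used by UltraQuery; the function names are hypothetical.

```python
import numpy as np

# Each query branch produces a fuzzy set: a score in [0, 1] per entity.
# Because these operators act element-wise on scores, they carry no
# entity- or relation-specific parameters and transfer to any KG.

def conj(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Fuzzy AND via the product t-norm."""
    return a * b

def disj(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Fuzzy OR via the probabilistic sum (dual of the product t-norm)."""
    return a + b - a * b

def neg(a: np.ndarray) -> np.ndarray:
    """Fuzzy NOT via the standard complement."""
    return 1.0 - a

# Example: scores for 4 candidate entities from two projection branches.
p = np.array([0.9, 0.2, 0.7, 0.0])
q = np.array([0.8, 0.5, 0.1, 0.6])

and_scores = conj(p, q)          # entities likely in both branches
or_not_scores = disj(p, neg(q))  # in the first branch or outside the second
```

Because these operators have no learned parameters, only the projection operation needs to generalize across graphs, which is exactly what the pre-trained inductive reasoning model provides.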