Answering first-order logic (FOL) queries over incomplete knowledge graphs (KGs) is difficult, especially for complex query structures that compose projection, intersection, union, and negation. We propose ROG, a retrieval-augmented framework that combines query-aware neighborhood retrieval with large language model (LLM) chain-of-thought reasoning. ROG decomposes a multi-operator query into a sequence of single-operator sub-queries and grounds each step in compact, query-relevant neighborhood evidence. Intermediate answer sets are cached and reused across steps, improving consistency on deep reasoning chains. This design reduces compounding errors and yields more robust inference on complex and negation-heavy queries. Overall, ROG provides a practical alternative to embedding-based logical reasoning by replacing learned operators with retrieval-grounded, step-wise inference. Experiments on standard KG reasoning benchmarks show consistent gains over strong embedding-based baselines, with the largest improvements on high-complexity and negation-heavy query types.
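The decomposition-and-caching idea described above can be illustrated with a minimal sketch. This is not the paper's implementation: the toy KG, the entity/relation names, and the step format are all hypothetical, and the LLM reasoning and neighborhood retrieval are replaced by direct set operations purely to show how single-operator sub-queries reuse cached intermediate answer sets.

```python
# Illustrative sketch (hypothetical names throughout, not ROG's actual code):
# step-wise execution of a multi-operator FOL query over a toy KG, with each
# intermediate answer set cached and reused by later steps.
from itertools import chain

# Toy KG as (head, relation, tail) triples.
KG = {
    ("paris", "capital_of", "france"),
    ("lyon", "located_in", "france"),
    ("berlin", "capital_of", "germany"),
    ("munich", "located_in", "germany"),
}

def project(entities, relation):
    """One-hop projection: tails reachable via `relation` from `entities`."""
    return {t for (h, r, t) in KG if r == relation and h in entities}

def execute(steps):
    """Run single-operator steps in order, caching every intermediate set."""
    cache = {}
    for name, op, args in steps:
        if op == "anchor":
            cache[name] = set(args)
        elif op == "project":
            src, rel = args
            cache[name] = project(cache[src], rel)
        elif op == "intersect":
            a, b = args
            cache[name] = cache[a] & cache[b]
        elif op == "union":
            a, b = args
            cache[name] = cache[a] | cache[b]
        elif op == "negate":
            # Complement taken w.r.t. all entities observed in the KG.
            (a,) = args
            universe = set(chain.from_iterable((h, t) for (h, _, t) in KG))
            cache[name] = universe - cache[a]
    return cache

# Query sketch: entities reachable from {paris, lyon} via capital_of OR
# located_in (i.e., a union of two projections over a shared anchor set).
steps = [
    ("s0", "anchor", ["paris", "lyon"]),
    ("s1", "project", ("s0", "capital_of")),
    ("s2", "project", ("s0", "located_in")),
    ("s3", "union", ("s1", "s2")),
]
answers = execute(steps)["s3"]
```

Note how `s1` and `s2` both read the cached anchor set `s0` rather than re-deriving it, which is the consistency benefit the abstract attributes to caching intermediate answers across steps; in ROG itself, each step would additionally be grounded in retrieved neighborhood evidence and resolved by LLM chain-of-thought reasoning rather than exact set operations.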