Large Language Models (LLMs) excel at many natural language processing tasks but often exhibit factual inconsistencies in knowledge-intensive settings. Integrating external knowledge resources, particularly knowledge graphs (KGs), provides a transparent and updatable foundation for more reliable reasoning. Knowledge Base Question Answering (KBQA), which queries and reasons over KGs, is central to this effort, especially for complex, multi-hop queries. However, multi-hop reasoning poses two key challenges: (1)~maintaining coherent reasoning paths, and (2)~avoiding prematurely discarding critical multi-hop connections. To tackle these challenges, we introduce iQUEST, a question-guided KBQA framework that iteratively decomposes complex queries into simpler sub-questions, ensuring a structured and focused reasoning trajectory. Additionally, we integrate a Graph Neural Network (GNN) to look ahead and incorporate 2-hop neighbor information at each reasoning step. This dual approach strengthens the reasoning process, enabling the model to explore viable paths more effectively. Extensive experiments demonstrate that iQUEST delivers consistent improvements across four benchmark datasets and four LLMs. The code is publicly available at: https://github.com/Wangshuaiia/iQUEST.
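To make the two ideas above concrete, the following is a minimal, self-contained sketch of the reasoning loop: a complex question is answered as a chain of sub-questions, and before each step a 2-hop neighborhood is gathered so that multi-hop connections are not pruned after a single hop. Everything here is illustrative, not the paper's implementation: the toy KG, the keyword-matching `answer` stand-in for an LLM call, and the plain breadth-first traversal standing in for the GNN look-ahead are all assumptions made for the sketch.

```python
# Toy knowledge graph as adjacency lists: entity -> [(relation, entity), ...].
# All entities and facts are illustrative examples, not from the paper.
KG = {
    "Inception": [("directed_by", "Christopher Nolan")],
    "Christopher Nolan": [("born_in", "London")],
    "London": [("located_in", "England")],
}

def two_hop_neighbors(kg, entity):
    """Collect all facts within 2 hops of `entity` -- a simple stand-in for
    the GNN look-ahead, so a path is judged on more than its next hop."""
    facts, frontier = [], [entity]
    for _ in range(2):  # exactly two hops
        next_frontier = []
        for e in frontier:
            for rel, tgt in kg.get(e, []):
                facts.append((e, rel, tgt))
                next_frontier.append(tgt)
        frontier = next_frontier
    return facts

def answer(sub_question, facts):
    """Hypothetical stand-in for an LLM call: pick the fact whose relation
    name appears in the sub-question."""
    for head, rel, tail in facts:
        if rel in sub_question:
            return tail
    return None

def iterative_qa(kg, sub_questions, start_entity):
    """Answer each sub-question in turn, feeding the previous answer
    forward as the anchor entity for the next reasoning step."""
    entity = start_entity
    for q in sub_questions:
        entity = answer(q, two_hop_neighbors(kg, entity))
        if entity is None:
            break  # no supporting fact found; abandon this path
    return entity

# "Where was the director of Inception born?" decomposed into two steps:
steps = ["Who directed_by Inception?", "Where born_in was that person?"]
print(iterative_qa(KG, steps, "Inception"))  # -> London
```

In this sketch the 2-hop gather already exposes the `born_in` fact while the reasoner is still anchored at "Inception", which is the kind of look-ahead signal that keeps a promising multi-hop path from being discarded prematurely.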