Large Language Models (LLMs) excel in many natural language processing tasks but often exhibit factual inconsistencies in knowledge-intensive settings. Integrating external knowledge resources, particularly knowledge graphs (KGs), provides a transparent and updatable foundation for more reliable reasoning. Knowledge Base Question Answering (KBQA), which queries and reasons over KGs, is central to this effort, especially for complex, multi-hop queries. However, multi-hop reasoning poses two key challenges: (1)~maintaining coherent reasoning paths, and (2)~avoiding prematurely discarding critical multi-hop connections. To tackle these challenges, we introduce iQUEST, a question-guided KBQA framework that iteratively decomposes complex queries into simpler sub-questions, ensuring a structured and focused reasoning trajectory. Additionally, we integrate a Graph Neural Network (GNN) to look ahead and incorporate 2-hop neighbor information at each reasoning step. This dual approach strengthens the reasoning process, enabling the model to explore viable paths more effectively. Detailed experiments demonstrate the consistent improvement delivered by iQUEST across four benchmark datasets and four LLMs.