Large language models (LLMs) based on the generative pre-trained Transformer architecture have achieved remarkable performance on knowledge graph question-answering (KGQA) tasks. However, LLMs often produce ungrounded subgraph planning or reasoning results in KGQA due to the hallucinatory behavior introduced by the generative paradigm, which may hinder the advancement of LLM-based KGQA models. To address this issue, we propose a novel LLM-based Discriminative Reasoning (LDR) method that explicitly models the subgraph retrieval and answer inference processes. By adopting discriminative strategies, LDR not only enhances the capability of LLMs to retrieve question-related subgraphs but also alleviates the ungrounded reasoning caused by the generative paradigm. Experimental results show that the proposed approach outperforms multiple strong baselines and achieves state-of-the-art performance on the widely used WebQSP and CWQ benchmarks.
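To make the contrast between the two paradigms concrete, the sketch below illustrates the discriminative idea in its simplest form: rather than generating an answer token by token (which may hallucinate entities absent from the graph), the model scores a fixed set of candidates drawn from the retrieved subgraph and selects the best one, so the output is grounded by construction. This is a minimal toy illustration, not the paper's method: the function names are hypothetical, and a lexical-overlap score stands in for the LLM's discriminator logits.

```python
# Hedged sketch of discriminative answer selection over a retrieved subgraph.
# A real system would score candidates with LLM logits; here a toy
# lexical-overlap score stands in, and all names are illustrative.

def score_candidate(question: str, candidate: str) -> float:
    """Stand-in for an LLM discriminator: fraction of question tokens
    that also appear in the candidate statement."""
    q_tokens = set(question.lower().split())
    c_tokens = set(candidate.lower().split())
    return len(q_tokens & c_tokens) / max(len(q_tokens), 1)

def select_answer(question: str, candidates: list[str]) -> str:
    """Discriminative paradigm: choose among grounded candidates instead
    of free-form generation, so the answer is always in the candidate set."""
    return max(candidates, key=lambda c: score_candidate(question, c))

# Candidate facts drawn from a (hypothetical) question-related subgraph.
candidates = [
    "Barack Obama born in Honolulu",
    "Barack Obama served as president",
]
answer = select_answer("where was Barack Obama born", candidates)
# The selected answer is guaranteed to be one of the retrieved facts.
```

Because the output space is restricted to retrieved candidates, ungrounded answers are impossible at the selection step; the remaining challenge, which the discriminative retrieval stage addresses, is ensuring the correct fact is present among the candidates.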