Large Language Models (LLMs) often struggle with tasks requiring external knowledge, such as knowledge-intensive Multiple Choice Question Answering (MCQA). Integrating Knowledge Graphs (KGs) can enhance reasoning; however, existing methods typically demand costly fine-tuning or retrieve noisy KG information. Recent approaches leverage Graph Neural Networks (GNNs) to generate KG-based input embedding prefixes as soft prompts for LLMs but fail to account for question relevance, resulting in noisy prompts. Moreover, in MCQA tasks, the absence of relevant KG knowledge for certain answer options remains a significant challenge. To address these issues, we propose Question-Aware Knowledge Graph Prompting (QAP), which incorporates question embeddings into GNN aggregation to dynamically assess KG relevance. QAP employs global attention to capture inter-option relationships, enriching soft prompts with inferred knowledge. Experimental results demonstrate that QAP outperforms state-of-the-art methods across multiple datasets, highlighting its effectiveness.
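To make the two mechanisms in the abstract concrete, the following is a minimal PyTorch-style sketch of (1) question-conditioned GNN aggregation that gates KG messages by their relevance to the question, and (2) global attention across answer options so an option with sparse KG coverage can borrow evidence from the others. All class names, tensor shapes, and the gating and pooling choices are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn


class QuestionAwareGNNLayer(nn.Module):
    """One message-passing layer whose edge aggregation is gated by the
    question embedding, so KG facts irrelevant to the question are
    down-weighted before they reach the soft prompt (illustrative sketch)."""

    def __init__(self, dim: int):
        super().__init__()
        self.msg = nn.Linear(2 * dim, dim)   # message from a (source, target) node pair
        self.gate = nn.Linear(2 * dim, 1)    # question-conditioned relevance score

    def forward(self, node_h: torch.Tensor, edge_index: torch.Tensor,
                q_emb: torch.Tensor) -> torch.Tensor:
        # node_h: (N, d) KG node embeddings; edge_index: (2, E); q_emb: (d,)
        src, dst = edge_index
        m = self.msg(torch.cat([node_h[src], node_h[dst]], dim=-1))          # (E, d)
        # Score each message against the question and squash the score into [0, 1].
        score = self.gate(torch.cat([m, q_emb.expand(m.size(0), -1)], dim=-1))
        w = torch.sigmoid(score)                                             # (E, 1)
        agg = torch.zeros_like(node_h)
        agg.index_add_(0, dst, w * m)        # relevance-weighted neighborhood sum
        return torch.relu(node_h + agg)


class CrossOptionAttention(nn.Module):
    """Global attention over the pooled subgraph embedding of every answer
    option, letting options exchange inferred knowledge (illustrative)."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, option_h: torch.Tensor) -> torch.Tensor:
        # option_h: (num_options, d); treat the options as one attention sequence.
        x = option_h.unsqueeze(0)            # (1, num_options, d)
        out, _ = self.attn(x, x, x)
        return out.squeeze(0)                # (num_options, d) soft-prompt vectors


if __name__ == "__main__":
    d, n_nodes, n_edges, n_options = 64, 10, 30, 4
    layer, mixer = QuestionAwareGNNLayer(d), CrossOptionAttention(d)
    q = torch.randn(d)                       # question embedding (assumed given)
    prompts = []
    for _ in range(n_options):               # one retrieved KG subgraph per option
        h = torch.randn(n_nodes, d)
        e = torch.randint(0, n_nodes, (2, n_edges))
        prompts.append(layer(h, e, q).mean(dim=0))   # mean-pool nodes to one vector
    soft_prompt = mixer(torch.stack(prompts))
    print(soft_prompt.shape)                 # torch.Size([4, 64])
```

In a full pipeline these per-option vectors would additionally be projected to the LLM's input-embedding width and prepended as prefix tokens; that projection and the subgraph retrieval step are omitted from this sketch.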