Pre-trained language models (PLMs) leverage chain-of-thought (CoT) reasoning to simulate human inference processes, achieving strong performance in multi-hop QA. However, a gap persists between PLMs' reasoning abilities and those of humans when tackling complex problems. Psychological studies suggest a vital connection between the explicit information in passages and human prior knowledge during reading. Nevertheless, current research has paid insufficient attention, from the perspective of human cognition studies, to linking input passages with the knowledge PLMs acquire during pre-training. In this study, we introduce a Prompting Explicit and Implicit knowledge (PEI) framework, which uses prompts to connect explicit and implicit knowledge, aligning with the human reading process for multi-hop QA. We treat the input passages as explicit knowledge and employ them to elicit implicit knowledge through unified prompt reasoning. Furthermore, our model incorporates type-specific reasoning via prompts, a form of implicit knowledge. Experimental results show that PEI performs comparably to the state of the art on HotpotQA. Ablation studies confirm the efficacy of our model in bridging and integrating explicit and implicit knowledge.