Large language models (LLMs) typically improve performance either by retrieving semantically similar information or by enhancing reasoning through structured prompts such as chain-of-thought. Although both strategies are considered crucial, it remains unclear which has the greater impact on model performance, or whether combining them is necessary. This paper addresses this question by proposing a knowledge graph (KG)-based random-walk reasoning approach that leverages causal relationships. We conduct experiments on a KG-based commonsense question answering task. The KG inherently provides both relevant information, such as related entity keywords, and a reasoning structure through the connections between its nodes. Experimental results show that the proposed KG-based random-walk reasoning method improves the reasoning ability and performance of LLMs. Interestingly, and contrary to conventional wisdom, incorporating three seemingly irrelevant sentences into the query via KG-based random-walk reasoning enhances LLM performance. These findings suggest that integrating causal structures into prompts can significantly improve reasoning capabilities, offering new insights into the role of causality in optimizing LLM performance.
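To make the core idea concrete, the following is a minimal sketch of KG-based random-walk prompting: walk a few causal edges from a seed entity, verbalize the traversed triples as sentences, and prepend them to the query. The toy graph, relation names, and verbalization here are illustrative assumptions, not the paper's actual KG or prompt format.

```python
import random

# Hypothetical toy KG: entity -> list of (relation, entity) edges.
# The paper's actual KG and relation set are not reproduced here.
KG = {
    "rain": [("causes", "wet ground"), ("causes", "umbrella use")],
    "wet ground": [("causes", "slipping")],
    "umbrella use": [("prevents", "getting wet")],
    "slipping": [("causes", "injury")],
}

def random_walk(kg, start, steps, rng=random):
    """Walk up to `steps` edges from `start`, collecting (head, relation, tail) triples."""
    triples = []
    node = start
    for _ in range(steps):
        edges = kg.get(node)
        if not edges:  # dead end: stop early
            break
        relation, nxt = rng.choice(edges)
        triples.append((node, relation, nxt))
        node = nxt
    return triples

def triples_to_sentences(triples):
    """Verbalize triples as plain sentences to prepend to an LLM query."""
    return [f"{h} {r} {t}." for h, r, t in triples]

# Build a prompt: three walk-derived context sentences, then the question.
walk = random_walk(KG, "rain", steps=3)
context = " ".join(triples_to_sentences(walk))
prompt = f"{context}\nQuestion: Why might the ground be slippery?"
print(prompt)
```

The walk-derived sentences may look only loosely related to the question, which mirrors the abstract's observation that even seemingly irrelevant KG-derived context can help, presumably because it carries causal structure.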