Large language models (LLMs) have demonstrated remarkable success across a wide range of tasks; however, they still encounter challenges in reasoning tasks that require understanding and inferring relationships between distinct pieces of information within text sequences. This challenge is particularly pronounced in tasks involving multi-step processes, such as logical reasoning and multi-hop question answering, where understanding implicit relationships between entities and leveraging multi-hop connections in the given context are crucial. Graphs, as fundamental data structures, explicitly represent pairwise relationships between entities, thereby offering the potential to enhance LLMs' reasoning capabilities. External graphs have proven effective in supporting LLMs across multiple tasks. However, in many reasoning tasks, no pre-existing graph structure is provided. Can we structure the implicit knowledge derived from context into graphs to assist LLMs in reasoning? In this paper, we propose Reasoning with Graphs (RwG), which first constructs explicit graphs from the context and then leverages these graphs to enhance LLM performance on reasoning tasks. Extensive experiments demonstrate the effectiveness of the proposed method on both logical reasoning and multi-hop question answering tasks.
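The two-step idea described above can be illustrated with a minimal sketch. This is not the authors' implementation: in RwG an LLM would extract the entity-relation triples from the context, which is stubbed here with a hand-written toy example; the second step, multi-hop reasoning over the resulting graph, is shown as a simple breadth-first search for a relation chain.

```python
from collections import deque

def build_graph(triples):
    """Step 1 (stubbed): structure (head, relation, tail) triples,
    which an LLM would extract from the context, into an explicit
    adjacency map: head -> list of (relation, tail)."""
    graph = {}
    for head, relation, tail in triples:
        graph.setdefault(head, []).append((relation, tail))
    return graph

def multi_hop(graph, start, goal):
    """Step 2: search the graph for a chain of relations linking
    start to goal, mimicking a multi-hop inference step."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        node, chain = queue.popleft()
        if node == goal:
            return chain
        for relation, nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, chain + [(node, relation, nxt)]))
    return None  # no connecting path in the extracted graph

# Toy context: "Alice is Bob's mother. Bob is Carol's father."
triples = [("Alice", "mother_of", "Bob"), ("Bob", "father_of", "Carol")]
g = build_graph(triples)
print(multi_hop(g, "Alice", "Carol"))
# → [('Alice', 'mother_of', 'Bob'), ('Bob', 'father_of', 'Carol')]
```

The recovered relation chain could then be serialized back into the prompt, giving the LLM an explicit multi-hop path instead of leaving the relationships implicit in the text.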