Large Language Models (LLMs) have demonstrated strong performance on complex tasks requiring both extensive knowledge and reasoning abilities. However, the existing LLM inference pipeline operates as an opaque process with no explicit separation between knowledge retrieval and reasoning steps, making the model's decision-making process unclear and disorganized. This ambiguity can lead to issues such as hallucination and knowledge forgetting, which significantly undermine the reliability of LLMs in high-stakes domains. In this paper, we propose a new inference paradigm that decomposes the complex inference process into two distinct and explicit actions: (1) memory recall, which retrieves relevant knowledge, and (2) reasoning, which performs logical steps based on the recalled knowledge. To facilitate this decomposition, we introduce two special tokens, memory and reason, guiding the model to distinguish steps that require knowledge retrieval from those that involve reasoning. Our experimental results show that this decomposition not only improves model performance but also enhances the interpretability of the inference process, enabling users to identify sources of error and refine model responses effectively. The code is available at https://github.com/MingyuJ666/Disentangling-Memory-and-Reasoning.
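As a minimal illustration of the idea (not the authors' implementation), the sketch below assumes each generated step is prefixed with a special tag and splits a trace into attributed memory-recall and reasoning steps; the tag format and example text here are hypothetical:

```python
import re

def split_trace(trace: str):
    """Split a tagged generation into (tag, text) steps.

    Assumes a hypothetical format in which each step starts with
    <memory> or <reason>; the actual token format is defined in the
    released code.
    """
    pattern = re.compile(r"<(memory|reason)>(.*?)(?=<(?:memory|reason)>|$)", re.S)
    return [(tag, text.strip()) for tag, text in pattern.findall(trace)]

# Hypothetical trace for a knowledge-plus-reasoning question.
trace = (
    "<memory>The Eiffel Tower was completed in 1889."
    "<reason>1889 falls in the 19th century, so the answer is the 19th century."
)
for tag, text in split_trace(trace):
    print(f"[{tag}] {text}")
```

Separating steps this way is what makes error attribution possible: a wrong answer can be traced to either a faulty recall step or a faulty reasoning step.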