Large Language Models (LLMs) have demonstrated remarkable reasoning capabilities but often struggle with reliability issues such as hallucination. While Knowledge Graphs (KGs) offer explicit grounding, existing KG-augmented LLM paradigms typically exhibit cognitive rigidity: they apply homogeneous search strategies, which leaves them vulnerable to instability under neighborhood noise and to reasoning stagnation under structural misalignment. To address these challenges, we propose CoG, a training-free framework inspired by Dual-Process Theory that mimics the interplay between intuition and deliberation. First, acting as the fast, intuitive process, the Relational Blueprint Guidance module leverages relational blueprints as interpretable soft structural constraints to rapidly stabilize the search direction against noise. Second, acting as the prudent, analytical process, the Failure-Aware Refinement module intervenes upon encountering reasoning impasses: it triggers evidence-conditioned reflection and executes controlled backtracking to overcome stagnation. Experiments on three benchmarks demonstrate that CoG significantly outperforms state-of-the-art approaches in both accuracy and efficiency.