To mitigate hallucinations in large language models (LLMs), we propose a framework that targets prompt-induced errors. Our method extends a chain-style knowledge distillation approach with a programmable module that guides knowledge graph exploration. The module is embedded as executable code within the reasoning prompt, allowing the model to draw on external structured knowledge during inference. Building on this design, we develop an enhanced distillation-based reasoning framework that explicitly regulates intermediate reasoning steps, yielding more reliable predictions. We evaluate the approach on multiple public benchmarks with GPT-4 and LLaMA-3.3. Experimental results show that code-guided reasoning significantly improves contextual modeling and reduces prompt-induced hallucinations: HIT@1, HIT@3, and HIT@5 increase by 15.64%, 13.38%, and 13.28%, respectively, with scores exceeding 95% across several evaluation settings. These findings indicate that the proposed method effectively constrains erroneous reasoning while improving both accuracy and interpretability.
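The mechanism summarized above can be illustrated with a minimal sketch. All identifiers, the toy graph, and the prompt wording here are hypothetical illustrations, not the paper's actual implementation: a small exploration routine is serialized into the reasoning prompt together with the facts it retrieves, so the model's intermediate steps are constrained to structured knowledge rather than free-form recall.

```python
# Hypothetical sketch: embedding an executable knowledge-graph exploration
# routine into a reasoning prompt (code-guided reasoning). Names and data
# are illustrative only.
import inspect
from typing import Dict, List, Tuple

# Toy knowledge graph: head entity -> list of (relation, tail entity) edges.
KG: Dict[str, List[Tuple[str, str]]] = {
    "Paris": [("capital_of", "France"), ("located_in", "Europe")],
    "France": [("member_of", "European Union")],
}

def explore_kg(entity: str, kg: Dict[str, List[Tuple[str, str]]]) -> List[str]:
    """Return the outgoing edges of an entity as textual triples."""
    return [f"({entity}, {rel}, {tail})" for rel, tail in kg.get(entity, [])]

def build_prompt(question: str, entity: str) -> str:
    """Assemble a reasoning prompt containing both the exploration code and
    the facts it retrieves, so intermediate steps stay grounded in the KG."""
    facts = "\n".join(explore_kg(entity, KG))
    module_source = inspect.getsource(explore_kg)
    return (
        "Use only facts retrieved by the module below.\n\n"
        f"```python\n{module_source}```\n\n"
        f"Retrieved facts:\n{facts}\n\n"
        f"Question: {question}\n"
        "Answer step by step, citing one retrieved fact per step."
    )

if __name__ == "__main__":
    print(build_prompt(
        "Which union is the country containing Paris a member of?", "Paris"))
```

In this sketch the prompt exposes both the exploration code and its output, which mirrors the abstract's idea of regulating intermediate steps: the model is asked to justify each step with a retrieved triple instead of unconstrained generation.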