Hallucination remains one of the key obstacles to the reliable deployment of large language models (LLMs), particularly in real-world applications. Among the many mitigation strategies, Retrieval-Augmented Generation (RAG) and reasoning enhancement have emerged as two of the most effective and widely adopted approaches, marking a shift from merely suppressing hallucinations toward balancing creativity and reliability. However, their synergistic potential, and the mechanisms by which they mitigate hallucinations, have not yet been systematically examined. This survey adopts an application-oriented, capability-enhancement perspective to analyze how RAG, reasoning enhancement, and their integration in Agentic Systems mitigate hallucinations. We propose a taxonomy that distinguishes knowledge-based from logic-based hallucinations, examine how RAG and reasoning address each type, and present a unified framework grounded in real-world applications, evaluations, and benchmarks.