The emergence of large language models (LLMs) has significantly advanced natural language processing (NLP), especially in text generation tasks such as question answering. However, model hallucinations remain a major challenge in natural language generation (NLG) because their causes are complex and varied. We systematically expand on the causes of factual hallucinations from the perspective of knowledge shortcuts, analyzing hallucinations that arise even from correct, defect-free training data, and demonstrating that knowledge-shortcut hallucinations are prevalent in generative models. To mitigate this issue, we propose a high-similarity pruning algorithm at the data preprocessing level that reduces spurious correlations in the data. In addition, we design a detection method targeted at knowledge-shortcut hallucinations to evaluate the effectiveness of our mitigation strategy. Experimental results show that our approach effectively reduces knowledge-shortcut hallucinations, particularly in fine-tuning tasks, without degrading question-answering performance. This work introduces a new paradigm for mitigating specific hallucination issues in generative models, enhancing their robustness and reliability in real-world applications.
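To make the idea of high-similarity pruning concrete, the following is a minimal sketch, not the paper's actual algorithm: it removes training samples whose lexical overlap with an already-kept sample exceeds a threshold, which is one simple way to reduce near-duplicate data that can induce spurious correlations. The Jaccard similarity measure and the 0.8 threshold are illustrative assumptions; the paper's own similarity metric and cutoff may differ.

```python
def jaccard_similarity(a: str, b: str) -> float:
    """Token-level Jaccard similarity between two texts (illustrative metric)."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    union = sa | sb
    return len(sa & sb) / len(union) if union else 0.0

def prune_high_similarity(samples: list[str], threshold: float = 0.8) -> list[str]:
    """Greedily keep a sample only if it is below `threshold` similarity
    to every sample already kept (hypothetical pruning rule)."""
    kept: list[str] = []
    for s in samples:
        if all(jaccard_similarity(s, k) < threshold for k in kept):
            kept.append(s)
    return kept

samples = [
    "the cat sat on the mat",
    "the cat sat on the mat today",  # near-duplicate of the first
    "dogs run fast in the park",
]
pruned = prune_high_similarity(samples)
```

A production variant would typically replace token overlap with embedding-based cosine similarity and use an approximate nearest-neighbor index so the comparison does not scale quadratically with corpus size.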