This survey provides an in-depth analysis of knowledge conflicts for large language models (LLMs), highlighting the complex challenges they encounter when blending contextual and parametric knowledge. Our focus is on three categories of knowledge conflict: context-memory, inter-context, and intra-memory conflicts. These conflicts can significantly impact the trustworthiness and performance of LLMs, especially in real-world applications where noise and misinformation are common. By categorizing these conflicts, exploring their causes, examining the behaviors of LLMs under such conflicts, and reviewing available solutions, this survey aims to shed light on strategies for improving the robustness of LLMs, thereby serving as a valuable resource for advancing research in this evolving area.