Large Language Models (LLMs) have revolutionized Natural Language Processing (NLP) applications, including automated text generation, question answering, and chatbots. However, they face a significant challenge: hallucinations, where models produce plausible-sounding but factually incorrect responses. This undermines trust and limits the applicability of LLMs across domains. Knowledge Graphs (KGs), on the other hand, provide a structured collection of interconnected facts represented as entities (nodes) and their relationships (edges). Recent research has leveraged KGs to provide context that fills gaps in an LLM's understanding of specific topics, offering a promising approach to mitigating hallucinations and enhancing the reliability and accuracy of LLMs while retaining their wide applicability. Nonetheless, this remains a very active area of research with numerous unresolved open problems. In this paper, we discuss these open challenges, covering state-of-the-art datasets and benchmarks as well as methods for knowledge integration and hallucination evaluation. In our discussion, we consider the current use of KGs in LLM systems and identify future directions within each of these challenges.
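The abstract describes KGs as facts stored as entity nodes connected by relation edges, retrieved to ground an LLM's answer. A minimal sketch of this KG-augmented prompting pattern is shown below; the toy triples, the `retrieve_facts` helper, and the prompt template are illustrative assumptions, not the method of any specific system discussed in the paper.

```python
# Minimal sketch of KG-augmented prompting: facts from a toy knowledge
# graph are serialized into the prompt so the LLM can ground its answer.
# The graph contents and prompt wording are hypothetical examples.

# A knowledge graph represented as (head entity, relation, tail entity) triples.
KG = [
    ("Marie Curie", "born_in", "Warsaw"),
    ("Marie Curie", "field", "physics"),
    ("Warsaw", "capital_of", "Poland"),
]

def retrieve_facts(kg, entity):
    """Return all triples in which the entity appears as head or tail."""
    return [t for t in kg if entity in (t[0], t[2])]

def build_prompt(question, entity):
    """Prepend retrieved KG facts as context, then pose the question."""
    facts = retrieve_facts(KG, entity)
    context = "\n".join(f"{h} {r.replace('_', ' ')} {t}" for h, r, t in facts)
    return (f"Facts:\n{context}\n\n"
            f"Question: {question}\n"
            f"Answer using only the facts above.")

prompt = build_prompt("Where was Marie Curie born?", "Marie Curie")
print(prompt)
```

In a full system, `retrieve_facts` would be replaced by entity linking and subgraph retrieval over a large KG, and the resulting prompt would be passed to the LLM, constraining its answer to verifiable facts rather than parametric memory.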