Methods to evaluate Large Language Model (LLM) responses and detect inconsistencies with respect to the provided knowledge, also known as hallucinations, are becoming increasingly important for LLM applications. Current metrics fall short: they do not provide explainable decisions, they fail to systematically check every piece of information in the response, and they are often too computationally expensive for practical use. We present GraphEval, a hallucination evaluation framework based on representing information in Knowledge Graph (KG) structures. Our method identifies the specific triples in the KG that are prone to hallucinations and hence provides more insight than previous methods into where in the response a hallucination has occurred, if at all. Furthermore, using our approach in conjunction with state-of-the-art natural language inference (NLI) models improves balanced accuracy on various hallucination benchmarks compared to using the raw NLI models. Lastly, we explore the use of GraphEval for hallucination correction by leveraging the structure of the KG, a method we name GraphCorrect, and demonstrate that the majority of hallucinations can indeed be rectified.