Large Language Models (LLMs) have gained widespread adoption in various natural language processing tasks, including question answering and dialogue systems. However, a major drawback of LLMs is hallucination, where they generate unfaithful or inconsistent content that deviates from the input source, which can lead to severe consequences. In this paper, we propose a robust discriminator named RelD to effectively detect hallucination in LLMs' generated answers. RelD is trained on RelQA, a constructed bilingual question-answering dialogue dataset comprising answers generated by LLMs together with a comprehensive set of metrics. Our experimental results demonstrate that the proposed RelD successfully detects hallucination in answers generated by diverse LLMs. Moreover, it performs well in distinguishing hallucinations in LLMs' generated answers on both in-distribution and out-of-distribution datasets. Additionally, we conduct a thorough analysis of the types of hallucinations that occur and present valuable insights. This research makes a significant contribution to detecting reliable answers generated by LLMs and holds noteworthy implications for mitigating hallucination in future work.