Although large language models (LLMs) have achieved significant success in recent years, hallucination remains a challenge, and numerous benchmarks have been proposed to detect it. Nevertheless, some of these benchmarks are not naturally generated by LLMs but are intentionally induced. Moreover, many focus solely on factuality hallucination while ignoring faithfulness hallucination. Additionally, although the dialogue format is widely used in the era of LLMs, current benchmarks concentrate only on sentence-level and passage-level hallucination. In this study, we propose DiaHalu, the first dialogue-level hallucination evaluation benchmark to our knowledge. We first integrate the collected topics into system prompts to facilitate dialogues between two ChatGPT3.5 instances. We then manually modify content that does not adhere to human language conventions and have the LLMs re-generate it, simulating authentic human-machine interaction scenarios. Finally, professional scholars annotate all samples in the dataset. DiaHalu covers four common multi-turn dialogue domains and five hallucination subtypes, extended from factuality and faithfulness hallucination. Experiments with several well-known LLMs and detection methods on the dataset show that DiaHalu is a challenging benchmark, holding significant value for further research.