The hallucination problem of Large Language Models (LLMs) significantly limits their reliability and trustworthiness. Humans have a self-awareness process that allows us to recognize what we don't know when faced with queries. Inspired by this, our paper investigates whether LLMs can estimate their own hallucination risk before response generation. We analyze LLM internal mechanisms broadly, both in terms of training data sources and across 15 diverse Natural Language Generation (NLG) tasks spanning over 700 datasets. Our empirical analysis reveals two key insights: (1) LLM internal states indicate whether the model has seen the query in its training data; and (2) LLM internal states indicate whether the model is likely to hallucinate on the query. Our study identifies particular neurons, activation layers, and tokens that play a crucial role in the LLM perception of uncertainty and hallucination risk. With a probing estimator, we leverage this self-assessment, achieving an average hallucination estimation accuracy of 84.32\% at run time.
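To make the probing-estimator idea concrete, the following is a minimal sketch, not the paper's implementation: it assumes GPT-2 as the host model, the final query token's hidden state at one fixed layer as the feature, and a logistic-regression probe; `queries`, `labels`, and `LAYER` are hypothetical stand-ins for the paper's data and layer selection.

```python
# Minimal sketch of a hallucination-risk probe over LLM internal states.
# Assumptions (not from the paper): GPT-2 as the host model, the last
# query token's hidden state at a fixed layer as the feature, and a
# logistic-regression probe; queries/labels below are hypothetical.
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

LAYER = 8  # hypothetical choice; the paper analyzes multiple layers


def hidden_state(query: str) -> np.ndarray:
    """Return the chosen layer's hidden state at the final query token."""
    inputs = tokenizer(query, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    return out.hidden_states[LAYER][0, -1].numpy()


# Hypothetical labels: 1 = model hallucinated on this query, 0 = it did not.
queries = ["Who wrote Hamlet?", "What is the capital of Atlantis?"]
labels = [0, 1]

X = np.stack([hidden_state(q) for q in queries])
probe = LogisticRegression(max_iter=1000).fit(X, labels)

# At run time, estimate hallucination risk before generating a response.
risk = probe.predict_proba(hidden_state("Name the 51st US state.")[None])[0, 1]
print(f"estimated hallucination risk: {risk:.2f}")
```

A linear probe is a common design choice here because a high probe accuracy from such a simple classifier is evidence that the risk signal is linearly encoded in the internal states rather than constructed by the probe itself.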