When asked to summarize articles or answer questions given a passage, large language models (LLMs) can hallucinate details and respond with unsubstantiated answers that are inaccurate with respect to the input context. This paper describes a simple approach for detecting such contextual hallucinations. We hypothesize that contextual hallucinations are related to the extent to which an LLM attends to information in the provided context versus its own generations. Based on this intuition, we propose a simple hallucination detection model whose input features are given by the ratio of attention weights on the context versus newly generated tokens (for each attention head). We find that a linear classifier based on these lookback ratio features is as effective as a richer detector that utilizes the entire hidden states of an LLM or a text-based entailment model. The lookback ratio-based detector -- Lookback Lens -- is found to transfer across tasks and even models, allowing a detector that is trained on a 7B model to be applied (without retraining) to a larger 13B model. We further apply this detector to mitigate contextual hallucinations, and find that a simple classifier-guided decoding approach is able to reduce the amount of hallucination, for example by 9.6% in the XSum summarization task.
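The lookback ratio feature described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes access to one decoding step's per-head attention distribution (e.g. as exposed by a transformer with attention outputs enabled), and the function name `lookback_ratios` is illustrative.

```python
import numpy as np

def lookback_ratios(attn, context_len):
    """Per-head lookback ratio for a single generated token.

    attn: array of shape (num_heads, seq_len) holding the attention
          weights of the current decoding step (each row sums to 1).
    context_len: number of positions belonging to the provided context;
          positions >= context_len are previously generated tokens.
    Returns shape (num_heads,): for each head, the attention mass on the
    context divided by the total mass on context plus generated tokens.
    """
    ctx = attn[:, :context_len].sum(axis=1)   # mass on the input context
    gen = attn[:, context_len:].sum(axis=1)   # mass on the model's own output
    return ctx / (ctx + gen)

# Toy example: 2 heads over 4 context tokens + 2 generated tokens.
attn = np.array([
    [0.30, 0.30, 0.20, 0.10, 0.05, 0.05],  # head 0 mostly reads the context
    [0.05, 0.05, 0.05, 0.05, 0.40, 0.40],  # head 1 mostly reads its own output
])
ratios = lookback_ratios(attn, context_len=4)  # -> [0.9, 0.2]
```

Collecting these ratios across all heads (and layers) for each generated span yields the feature vector that the linear hallucination classifier consumes; a low ratio suggests the model is leaning on its own generations rather than the source context.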