When asked to summarize articles or answer questions given a passage, large language models (LLMs) can hallucinate details and respond with unsubstantiated answers that are inaccurate with respect to the input context. This paper describes a simple approach for detecting such contextual hallucinations. We hypothesize that contextual hallucinations are related to the extent to which an LLM attends to information in the provided context versus its own generations. Based on this intuition, we propose a simple hallucination detection model whose input features are given by the ratio of attention weights on the context versus newly generated tokens (for each attention head). We find that a linear classifier based on these lookback ratio features is as effective as a richer detector that utilizes the entire hidden states of an LLM or a text-based entailment model. The lookback ratio-based detector -- Lookback Lens -- is found to transfer across tasks and even models, allowing a detector that is trained on a 7B model to be applied (without retraining) to a larger 13B model. We further apply this detector to mitigate contextual hallucinations, and find that a simple classifier-guided decoding approach is able to reduce the amount of hallucination, for example by 9.6% in the XSum summarization task.
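The lookback ratio feature described above can be illustrated with a minimal sketch. This is not the authors' implementation; the array layout, function name, and the small epsilon for numerical safety are assumptions for illustration. For each decoding step and each attention head, we sum the attention mass placed on context tokens and on previously generated tokens, and take their ratio:

```python
import numpy as np

def lookback_ratio(attn, context_len):
    """Per-head lookback ratio for a single decoding step.

    attn: array of shape (num_heads, seq_len) holding the attention
          weights of the current step over all prior tokens
          (hypothetical layout; a real model yields one such matrix
          per layer).
    context_len: number of tokens that belong to the provided context.
    Returns one ratio per attention head in [0, 1]; values near 1 mean
    the head attends mostly to the context, near 0 mostly to its own
    generations.
    """
    ctx = attn[:, :context_len].sum(axis=1)   # attention mass on the context
    new = attn[:, context_len:].sum(axis=1)   # mass on newly generated tokens
    return ctx / (ctx + new + 1e-12)          # ratio per head (eps avoids /0)
```

Stacking these per-head ratios across layers and decoding steps would give the feature vector on which a simple linear classifier (the Lookback Lens) can be trained to flag contextually hallucinated spans.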