Large language models (LLMs) are prone to hallucinations, sparking widespread efforts to detect and prevent them. Recent work attempts to mitigate hallucinations by intervening in the model's generation, typically by computing representative vectors of hallucinated vs. grounded generations and steering the model's hidden states away from a hallucinatory state. However, existing studies employ different setups and do not properly separate the different possible causes of hallucinations, making interventions misguided. In this work, we introduce a method for categorizing examples based on the model's prior knowledge, named WACK. We construct WACK benchmarks that support interventions in two settings: open-book and closed-book question answering. Using these benchmarks, we perform an extensive investigation of the effect of different choices for intervention, such as which components to intervene on, and how often and how strongly to intervene. We find that intervention success varies depending on the component, with the attention blocks performing well and the residual stream proving detrimental to language modeling capabilities. We also show that interventions can benefit from representative vectors collected before, rather than after, a hallucination occurs. Finally, we introduce a new dynamic intervention, which intervenes only when needed and is thus more robust than standard static interventions. The code is available at https://github.com/technion-cs-nlp/hallucination-mitigation .
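The steering approach the abstract describes can be illustrated with a minimal sketch. This is not the paper's implementation: it only shows the general idea of computing a representative "hallucination direction" as a mean difference between hidden states from hallucinated and grounded generations, then shifting new hidden states away from it. All names (`steer`, `alpha`, the toy data) are illustrative assumptions.

```python
import numpy as np

def steer(hidden, direction, alpha=1.0):
    """Shift hidden states away from a hallucination direction.

    With alpha=1, the component of `hidden` along `direction`
    is removed entirely; smaller alpha removes only part of it.
    """
    unit = direction / np.linalg.norm(direction)
    # Subtract the (scaled) projection of the hidden state onto the direction.
    return hidden - alpha * (hidden @ unit)[..., None] * unit

# Toy stand-ins for hidden states collected from grounded vs. hallucinated
# generations (in practice these would come from a chosen model component,
# e.g. an attention block's output).
rng = np.random.default_rng(0)
grounded = rng.normal(size=(100, 8))
hallucinated = grounded + rng.normal(0.5, 0.1, size=(100, 8))

# Representative vector: mean difference between the two populations.
direction = hallucinated.mean(axis=0) - grounded.mean(axis=0)

# Steer a new hidden state away from the hallucination direction.
h = rng.normal(size=(1, 8))
steered = steer(h, direction, alpha=1.0)
```

A "dynamic" variant, as the abstract suggests, would first check how strongly `hidden` projects onto `direction` and apply `steer` only when that projection exceeds a threshold, leaving already-grounded states untouched.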