Large language models (LLMs) demonstrate impressive performance in text generation. However, LLMs still suffer from hallucinations. In this work, we propose an inference-time method, Self-Highlighted Hesitation (SH2), to help LLMs decode more truthfully. SH2 is based on a simple fact rooted in information theory: for an LLM, tokens predicted with lower probabilities tend to be more informative than others. Our analysis shows that the tokens assigned lower probabilities by an LLM are more likely to be closely related to factual information, such as nouns, proper nouns, and adjectives. Therefore, we propose to "highlight" the factual information by selecting the tokens with the lowest probabilities and concatenating them to the original context, thus forcing the model to repeatedly read and hesitate on these tokens before generation. During decoding, we also adopt contrastive decoding to emphasize the difference in output probabilities brought by the hesitation. Experimental results demonstrate that SH2, requiring no additional data or models, can effectively help LLMs elicit factual knowledge and distinguish hallucinated contexts. SH2 achieves significant and consistent improvements for LLaMA-7b, LLaMA2-7b, and Mistral-7b on multiple hallucination tasks.
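The two-step idea described above — select the lowest-probability tokens as "hesitation" tokens, then contrast the hesitation-augmented run against the plain run — can be sketched in a few lines. This is a minimal illustration with toy numbers, not the paper's implementation: the function names, the `alpha` weight, and the scoring formula (base score plus a weighted difference, a common contrastive-decoding form) are all assumptions; a real SH2 setup would take probabilities and logits from an actual LLM.

```python
def select_hesitation_tokens(token_probs, k=2):
    """Pick the k tokens the model assigned the lowest probabilities
    (hypothesized to carry the most factual information)."""
    ranked = sorted(token_probs, key=lambda tp: tp[1])
    return [tok for tok, _ in ranked[:k]]


def contrastive_scores(logits_hesitate, logits_base, alpha=0.5):
    """Emphasize the shift the hesitation context induces:
    score = logit_with_hesitation + alpha * (shift from the base run).
    (Illustrative form; the paper's exact combination may differ.)"""
    return {
        tok: logits_hesitate[tok] + alpha * (logits_hesitate[tok] - logits_base[tok])
        for tok in logits_hesitate
    }


# Toy per-token probabilities for a prompt (illustrative numbers only).
context_probs = [("The", 0.9), ("capital", 0.2), ("of", 0.8), ("France", 0.1)]
hesitation = select_hesitation_tokens(context_probs, k=2)

# The low-probability tokens are concatenated back onto the original context,
# forcing the model to "re-read" them before generating.
augmented_prompt = " ".join(hesitation) + " | The capital of France"

# Hypothetical next-token logits with and without the hesitation prefix.
logits_base = {"Paris": 1.0, "Lyon": 0.8}
logits_hesitate = {"Paris": 1.4, "Lyon": 0.7}
scores = contrastive_scores(logits_hesitate, logits_base)
```

Here the contrast boosts "Paris" (whose logit rose under hesitation) and suppresses "Lyon" (whose logit fell), which is the intended effect: amplifying whatever change the highlighted factual tokens cause in the output distribution.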