Hallucination is often regarded as a major impediment to using large language models (LLMs), especially for knowledge-intensive tasks. Even when the training corpus consists solely of true statements, language models still generate hallucinations in the form of amalgamations of multiple facts. We coin this phenomenon ``knowledge overshadowing'': when we query knowledge from a language model under multiple conditions, some conditions overshadow others, leading to hallucinated outputs. This phenomenon partially stems from training data imbalance, which we verify on both pretrained and fine-tuned models across a wide range of language model families and sizes. From a theoretical point of view, knowledge overshadowing can be interpreted as over-generalization of the dominant conditions (patterns). We show that the hallucination rate grows with both the imbalance ratio (between the popular and unpopular conditions) and the length of the dominant condition's description, consistent with our derived generalization bound. Finally, we propose to utilize the overshadowing conditions as a signal to catch hallucination before it is produced, along with a training-free self-contrastive decoding method to alleviate hallucination during inference. Our proposed approach achieves up to 82% F1 for hallucination anticipation and 11.2% to 39.4% hallucination control, across different models and datasets.
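To make the decoding idea concrete, below is a minimal sketch of one plausible form of training-free self-contrastive decoding: contrast the next-token logits of the full multi-condition prompt against those of a prompt containing only the dominant (overshadowing) condition, suppressing tokens driven purely by the dominant condition. The model choice, the prompt-ablation strategy, and the hyperparameter `alpha` are illustrative assumptions, not the paper's exact algorithm.

```python
# Hedged sketch of self-contrastive decoding (assumptions noted above;
# not the authors' exact method). Requires torch and transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # any HuggingFace causal LM; gpt2 used for illustration
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

@torch.no_grad()
def self_contrastive_step(full_prompt: str, dominant_only_prompt: str,
                          alpha: float = 1.0) -> int:
    """Pick the next token by amplifying what the full prompt adds over
    the dominant-condition-only prompt. `alpha` (hypothetical
    hyperparameter) controls the contrast strength."""
    full_ids = tok(full_prompt, return_tensors="pt").input_ids
    dom_ids = tok(dominant_only_prompt, return_tensors="pt").input_ids
    full_logits = model(full_ids).logits[0, -1]  # next-token logits, full prompt
    dom_logits = model(dom_ids).logits[0, -1]    # next-token logits, dominant condition only
    # Boost directions in which the overshadowed condition shifts the logits.
    contrast = full_logits + alpha * (full_logits - dom_logits)
    return int(contrast.argmax())

# Usage: a two-condition query where one condition ("American") may
# overshadow the other ("female") due to training-data imbalance.
nxt = self_contrastive_step(
    full_prompt="The most famous American female physicist is",
    dominant_only_prompt="The most famous American physicist is",
)
print(tok.decode(nxt))
```

The design choice here mirrors contrastive decoding: the logit difference isolates the contribution of the overshadowed condition, and the same difference could plausibly serve as the anticipation signal, since a near-zero gap suggests the model is ignoring that condition.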