Large language models often hallucinate with high confidence on "random facts" that lack inferable patterns. We formalize the memorization of such facts as a membership testing problem, unifying the discrete error metrics of Bloom filters with the continuous log-loss of LLMs. By analyzing this problem in the regime where facts are sparse in the universe of plausible claims, we establish a rate-distortion theorem: the optimal memory efficiency is characterized by the minimum KL divergence between score distributions on facts and non-facts. This theoretical framework provides a distinctive explanation for hallucination: even with optimal training, perfect data, and a simplified "closed world" setting, the information-theoretically optimal strategy under limited capacity is not to abstain or forget, but to assign high confidence to some non-facts, resulting in hallucination. We validate this theory empirically on synthetic data, showing that hallucinations persist as a natural consequence of lossy compression.
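The Bloom-filter analogy above can be made concrete with a minimal sketch (illustrative only, not the paper's construction): a lossy membership structure stores a sparse set of "facts" in limited memory, recalls all of them, yet confidently asserts membership for some "non-facts" — the discrete analogue of hallucination under lossy compression. The names `facts`, `non_facts`, and the parameter choices are hypothetical.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: lossy set membership with one-sided error."""

    def __init__(self, m_bits, k_hashes):
        self.m = m_bits
        self.k = k_hashes
        self.bits = [False] * m_bits

    def _positions(self, item):
        # Derive k hash positions from a salted SHA-256 digest.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = True

    def __contains__(self, item):
        # True for every stored fact (no false negatives); may also be
        # True for a non-fact -- a confident false claim, i.e. a
        # "hallucination" forced by limited capacity.
        return all(self.bits[p] for p in self._positions(item))

# Sparse facts inside a much larger universe of plausible claims.
facts = [f"fact-{i}" for i in range(1000)]
non_facts = [f"nonfact-{i}" for i in range(10000)]

bf = BloomFilter(m_bits=8000, k_hashes=5)  # ~8 bits of memory per fact
for f in facts:
    bf.add(f)

# Every stored fact is recalled, but a fraction of non-facts is
# asserted with full confidence: the optimal use of limited bits
# trades false positives for memory, rather than abstaining.
assert all(f in bf for f in facts)
fp_rate = sum(nf in bf for nf in non_facts) / len(non_facts)
print(f"false-positive (hallucination) rate: {fp_rate:.3f}")
```

With roughly 8 bits per stored item, the standard Bloom-filter analysis predicts a false-positive rate of a few percent; the paper's continuous log-loss setting generalizes this discrete error to score distributions, where the same capacity limit forces high confidence on some non-facts.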