Despite numerous mitigation attempts since the inception of language models, hallucinations remain a persistent problem even in today's frontier LLMs. Why is this? We review existing definitions of hallucination and fold them into a single, unified definition that subsumes prior ones: hallucination is inaccurate (internal) world modeling, surfaced in a form observable to the user. Examples include stating a fact that contradicts a knowledge base, or producing a summary that contradicts the source. By varying the reference world model and the conflict policy, our framework recovers prior definitions as special cases. We argue that this unified view is useful because it forces evaluations to make their assumed reference "world" explicit, distinguishes true hallucinations from planning or reward errors, and provides a common language for comparing benchmarks and discussing mitigation strategies. Building on this definition, we outline plans for a family of benchmarks that use synthetic, fully specified reference world models to stress-test and improve world-modeling components.