This paper establishes a fundamental Impossibility Theorem: no LLM performing non-trivial knowledge aggregation can simultaneously achieve truthful knowledge representation, semantic information conservation, complete revelation of relevant knowledge, and knowledge-constrained optimality. This impossibility stems from the mathematical structure of information aggregation, not from engineering limitations. We prove it by modeling inference as an auction of ideas, in which distributed components compete to influence responses using their encoded knowledge. The proof employs three independent approaches: mechanism design (the Green-Laffont theorem), proper scoring rules (Savage), and transformer architecture analysis (log-sum-exp convexity). We introduce a semantic information measure and an emergence operator to analyze computationally bounded and unbounded reasoning. Bounded reasoning makes latent information gradually accessible, enabling incremental insight and creativity, whereas unbounded reasoning makes all derivable knowledge immediately accessible while preserving semantic content. We prove the conservation-reasoning dichotomy: meaningful reasoning necessarily violates information conservation. Our framework suggests that hallucination and imagination are mathematically identical; both violate at least one of the four essential properties. The Jensen gap in transformer attention quantifies this violation as excess confidence beyond the constituent evidence. This unified view explains why capable models must balance truthfulness against creativity. These results provide principled foundations for managing hallucination trade-offs in AI systems: rather than trying to eliminate hallucination, we should optimize these inevitable trade-offs for specific applications. We conclude with philosophical implications connecting the impossibility to fundamental limits of reason.
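As a minimal numeric sketch of the Jensen-gap intuition above (an illustrative assumption about how the gap might be computed, not the paper's formal definition): because log is concave, the log-probability of an attention-weighted mixture always meets or exceeds the attention-weighted average of the constituents' own log-probabilities, so the mixed output can express more confidence than any evidence-weighted average of its sources.

```python
import math

def jensen_gap(weights, probs):
    """Illustrative Jensen gap for one attention-style mixture.

    weights: attention weights (non-negative, summing to 1)
    probs:   each constituent's probability for the same event
    """
    # log of the mixture probability: the "expressed confidence"
    mixed = math.log(sum(w * p for w, p in zip(weights, probs)))
    # attention-weighted average of constituent log-probabilities
    avg = sum(w * math.log(p) for w, p in zip(weights, probs))
    # Non-negative by Jensen's inequality (log is concave):
    # log(E[p]) >= E[log p], with equality iff all probs agree.
    return mixed - avg

# Two equally weighted sources that disagree: the gap is strictly positive,
# i.e. the mixture is more confident than its constituents' average evidence.
gap = jensen_gap([0.5, 0.5], [0.9, 0.1])
```

With a single source (or identical sources) the gap collapses to zero, matching the intuition that excess confidence arises only when aggregation is non-trivial.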