Symbolic systems operate over exact identities: variables denote specific objects, pointers target precise memory locations, and database keys refer to singular records. Neural embeddings generalize by compressing away semantic detail, but this compression creates collision ambiguity: multiple distinct entities can share the same representation value. We characterize exactly how much additional information must be supplied to recover precise identity from such representations. The answer is controlled by a single combinatorial object: the collision-fiber geometry of the representation map $\pi$. Let $A_\pi = \max_u |\pi^{-1}(u)|$ be the size of the largest collision fiber. We prove a tight fixed-length converse $L \ge \log_2 A_\pi$, an exact finite-block scaling law, a pointwise adaptive budget $\lceil \log_2 |\pi^{-1}(u)| \rceil$, and the rate-distortion tradeoff with an explicit distortion floor when identity bits are withheld. The same fiber geometry determines the query complexity and canonical structure of distinguishing families. Because this residual ambiguity is structural rather than representation-specific, symbolic identity mechanisms (handles, keys, pointers, nominal tags) are the necessary system-level complement to any non-injective semantic representation. All main results are machine-checked in Lean 4.

Keywords: semantics-aware compression, zero-error coding, neurosymbolic systems, learned representations, side information
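As a minimal illustration of the quantities in the abstract, the sketch below computes the collision fibers of a toy non-injective map, the largest fiber size $A_\pi$, the fixed-length converse $\lceil \log_2 A_\pi \rceil$, and the pointwise adaptive budget $\lceil \log_2 |\pi^{-1}(u)| \rceil$. The specific map ($x \mapsto x \bmod 5$) and domain are hypothetical examples, not taken from the paper:

```python
import math
from collections import defaultdict

def fibers(pi, domain):
    """Group domain elements by their representation value pi(x):
    each group is one collision fiber pi^{-1}(u)."""
    f = defaultdict(list)
    for x in domain:
        f[pi(x)].append(x)
    return dict(f)

# Hypothetical non-injective representation: integers 0..15 mapped mod 5.
domain = range(16)
pi = lambda x: x % 5
f = fibers(pi, domain)

# Largest collision fiber A_pi; any fixed-length identity code needs
# at least ceil(log2(A_pi)) bits in the worst case.
A = max(len(v) for v in f.values())
L_min = math.ceil(math.log2(A))

# Pointwise adaptive budget: ceil(log2 |pi^{-1}(u)|) bits for each u
# (0 bits when the fiber is a singleton, since log2(1) = 0).
budgets = {u: math.ceil(math.log2(len(v))) for u, v in f.items()}
```

Here the fiber of residue 0 is {0, 5, 10, 15}, so `A == 4` and two bits suffice and are necessary to disambiguate it; the size-3 fibers also need $\lceil \log_2 3 \rceil = 2$ bits each.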