Current AI systems exhibit a fundamental limitation: they resolve ambiguity prematurely. This premature semantic collapse, the collapsing of multiple valid interpretations into a single output, stems from classical identity assumptions in neural architectures. We propose Non-Resolution Reasoning (NRR), which treats ambiguity retention as a valid reasoning mode. NRR introduces three principles: (1) Non-Identity ($A \neq A$): the same symbol refers to different entities across contexts; (2) Approximate Identity ($A \approx A$): entities share partial overlap without being identical; (3) Non-Resolution: conflicting interpretations coexist without forced convergence. We formalize these principles through Multi-Vector Embeddings, Non-Collapsing Attention, and Contextual Identity Tracking (CIT). Functional verification via Turn-1 entropy measurement shows that NRR-lite maintains high entropy ($H = 0.63$) at ambiguous turns, whereas standard architectures collapse early ($H = 0.10$), demonstrating that NRR preserves interpretive flexibility until disambiguating context arrives. The question is not whether AI should resolve ambiguity, but when, how, and under whose control.
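The Turn-1 entropy metric above can be illustrated with a minimal sketch. The code below assumes a normalized Shannon entropy over a model's probability distribution across candidate interpretations of an ambiguous input; the function name and the example distributions are illustrative, not the paper's actual measurement protocol or reported values.

```python
import math

def normalized_entropy(probs):
    """Shannon entropy of an interpretation distribution,
    normalized to [0, 1] by dividing by log(k) for k candidates.
    1.0 = maximal ambiguity retained; near 0 = collapsed."""
    h = -sum(p * math.log(p) for p in probs if p > 0)
    return h / math.log(len(probs)) if len(probs) > 1 else 0.0

# Hypothetical Turn-1 distributions over three candidate readings:
retained = [0.45, 0.35, 0.20]   # ambiguity preserved (NRR-style)
collapsed = [0.97, 0.02, 0.01]  # early semantic collapse

print(normalized_entropy(retained))   # high
print(normalized_entropy(collapsed))  # low
```

A distribution that keeps mass spread across interpretations scores high, while one that commits almost all mass to a single reading scores near zero, which is the contrast the $H = 0.63$ vs. $H = 0.10$ comparison captures.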