Current artificial intelligence systems exhibit a fundamental architectural limitation: they resolve ambiguity prematurely. This premature semantic collapse, in which multiple valid interpretations are collapsed into a single output, stems from classical identity assumptions in neural architectures. We propose Non-Resolution Reasoning (NRR), a framework that treats ambiguity retention as a valid reasoning mode. NRR introduces three principles: (1) Non-Identity ($A \neq A$): the same symbol refers to different entities across contexts; (2) Approximate Identity ($A \approx A$): entities share partial structural overlap without being identical; (3) Non-Resolution: conflicting interpretations coexist without forced convergence. We formalize these through Multi-Vector Embeddings for context-dependent representation, Non-Collapsing Attention for parallel interpretation retention, and Contextual Identity Tracking (CIT) for maintaining $A \neq A$ across inference. We illustrate NRR through case studies in paradox handling, creative generation, and context-dependent reasoning. Functional verification on a synthetic two-turn disambiguation task shows that NRR-lite maintains high entropy ($H = 0.91$ bits, near the $1.0$-bit maximum) at ambiguous turns while standard architectures collapse early ($H = 0.15$ bits), preserving interpretive flexibility until context arrives. NRR challenges the assumption that meaning must collapse to be useful. The question is not whether AI should resolve ambiguity, but when, how, and under whose control.
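The reported entropy values can be read as follows. For a two-way ambiguity, Shannon entropy over the model's interpretation distribution ranges from $0$ bits (fully collapsed) to $1$ bit (both readings equally live). A minimal sketch, using hypothetical interpretation probabilities chosen only to illustrate what $H = 0.91$ and $H = 0.15$ bits correspond to (they are not the paper's data):

```python
import math

def entropy_bits(probs):
    """Shannon entropy in bits over a distribution of interpretations."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical two-interpretation distributions (illustrative only):
# a near-uniform distribution keeps both readings live, while a sharply
# peaked one has already collapsed onto a single reading.
retained = entropy_bits([0.67, 0.33])   # ~0.91 bits, near the 1.0-bit maximum
collapsed = entropy_bits([0.98, 0.02])  # ~0.14 bits, early semantic collapse
```

On this scale, an NRR-style system keeps the distribution near-uniform at the ambiguous turn and only sharpens it once disambiguating context arrives.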