Current paradigms in deep learning prioritize computational throughput over numerical precision, relying on the assumption that intelligence emerges from statistical correlation at scale. In this paper, we challenge this orthodoxy. We propose the Exactness Hypothesis: that Artificial General Intelligence (AGI), and high-order causal inference in particular, requires a computational substrate capable of arbitrary-precision arithmetic. We argue that the "hallucinations" and logical incoherence observed in current Large Language Models (LLMs) are artifacts of IEEE 754 floating-point approximation errors accumulating across deep compositional functions. To mitigate this, we introduce the Halo Architecture, a paradigm shift to rational arithmetic ($\mathbb{Q}$) supported by a novel Exact Inference Unit (EIU). Empirical validation on the Huginn-0125 prototype demonstrates that while BF16 baselines at the 600B-parameter scale collapse on chaotic systems, Halo maintains zero numerical divergence indefinitely. This work establishes exact arithmetic as a prerequisite for reducing logical uncertainty in System 2 AGI.
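To make the central claim concrete, the snippet below is a minimal sketch (not the Halo/EIU implementation, which the paper describes separately) of rounding-error amplification in a chaotic system. It iterates the logistic map $x \mapsto 4x(1-x)$ under three substrates: BF16 emulated by truncating the float32 encoding (the `bf16` helper and its once-per-step rounding policy are simplifying assumptions of this sketch), IEEE 754 float64, and exact rational arithmetic over $\mathbb{Q}$ via Python's `fractions.Fraction`.

```python
import struct
from fractions import Fraction

def bf16(x: float) -> float:
    """Emulate bfloat16 by keeping the top 16 bits of the float32 encoding.
    Truncation rather than round-to-nearest; a simplification for this sketch."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return struct.unpack("<f", struct.pack("<I", bits & 0xFFFF_0000))[0]

# 5/16 = 0.3125 is exactly representable in all three substrates,
# so every trajectory starts from the identical point.
x_bf, x_64, x_q = bf16(0.3125), 0.3125, Fraction(5, 16)

for step in range(1, 21):
    x_bf = bf16(4.0 * x_bf * (1.0 - x_bf))  # round once per step (simplification)
    x_64 = 4.0 * x_64 * (1.0 - x_64)        # rounds implicitly at each operation
    x_q  = 4 * x_q * (1 - x_q)              # zero rounding error, by construction
    if step % 5 == 0:
        exact = float(x_q)
        print(f"step {step:2d}  bf16 err {abs(x_bf - exact):.3e}  "
              f"f64 err {abs(x_64 - exact):.3e}")
```

On this orbit the emulated-BF16 trajectory decorrelates from the exact one within roughly a dozen steps, while the float64 error climbs from its $\sim 2^{-53}$ rounding floor by about a factor of two per step (the map's Lyapunov amplification). The rational trajectory stays exact at every step, at the price of operand sizes that roughly double per iteration, which is the overhead an exact substrate such as the proposed EIU would have to absorb.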