Prevailing AI training infrastructure assumes reverse-mode automatic differentiation over IEEE-754 arithmetic. The memory overhead of training relative to inference, optimizer complexity, and structural degradation of geometric properties through training are consequences of this arithmetic substrate. This paper develops an alternative training architecture grounded in three prior results: the Dimensional Type System and Deterministic Memory Management framework [6], which establishes stack-eligible gradient allocation and exact quire accumulation as design-time verifiable properties; the Program Hypergraph [8], which establishes grade preservation through geometric algebra computations as a type-level invariant; and the b-posit 2026 standard [10], which makes posit arithmetic tractable across hardware targets conventionally considered inference-only. Their composition enables depth-independent training memory bounded to approximately twice the inference footprint, grade-preserving weight updates, and exact gradient accumulation, applicable uniformly to loss-function-optimized and spike-timing-dependent neuromorphic models. We introduce Bayesian distillation, a mechanism by which the latent prior structure of a general-purpose model is extracted through the ADM training regime, resolving the data-scarcity bootstrapping problem for domain-specific training. For deployment, we introduce warm rotation, an operational pattern in which an updated model transitions into an active inference pathway without service interruption, with structural correctness formalized through PHG certificates and signed version records. The result is a class of domain-specific AI systems that are smaller and more precise than general-purpose models, continuously adaptive, verifiably correct with respect to the physical structure of their domains, and initializable from existing models.
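The exact-accumulation property the abstract attributes to the quire can be illustrated in miniature. The sketch below is not the b-posit 2026 quire (a wide fixed-point register defined by the posit standard); it uses exact rational arithmetic as a stand-in to show the principle: accumulate a dot product with no intermediate rounding, then round once at the end, versus IEEE-754 floating point, which rounds after every step. All names here are illustrative, not from the cited frameworks.

```python
from fractions import Fraction

def quire_accumulate(pairs):
    """Quire-style dot product: accumulate exactly, round once at the end.

    Illustrative only: exact rationals stand in for the posit standard's
    wide fixed-point quire register.
    """
    acc = Fraction(0)
    for a, b in pairs:
        acc += Fraction(a) * Fraction(b)  # exact, no intermediate rounding
    return float(acc)  # single rounding step

def float_accumulate(pairs):
    """Conventional accumulation: rounds after every add."""
    acc = 0.0
    for a, b in pairs:
        acc += a * b
    return acc

# Cancellation-heavy case: intermediate rounding absorbs the small term.
pairs = [(1e16, 1.0), (1.0, 1.0), (-1e16, 1.0)]
print(quire_accumulate(pairs))  # 1.0  (exact sum survives cancellation)
print(float_accumulate(pairs))  # 0.0  (the +1.0 is lost to rounding)
```

The same single-rounding discipline is what makes gradient accumulation order-independent and bit-reproducible, which is the property the ADM framework verifies at design time.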