This paper reports an unexpected finding: in a deterministic hyperdimensional computing (HDC) architecture based on Galois-field algebra, a path-dependent semantic selection mechanism emerges that is equivalent to spike-timing-dependent plasticity (STDP), with a magnitude predictable a priori by a closed-form expression that matches large-scale measurements. This addresses, at an algebraic level, limitations of modern AI including catastrophic forgetting, learning stagnation, and the Binding Problem. We propose VaCoAl (Vague Coincident Algorithm) and its Python implementation PyVaCoAl, which combine ultra-high-dimensional memory with deterministic logic. Rooted in Sparse Distributed Memory, VaCoAl resolves orthogonalisation and retrieval in high-dimensional binary spaces via Galois-field diffusion, enabling low-load deployment. It is a memory-centric architecture that prioritises retrieval and association, enabling reversible composition while preserving element independence and supporting compositional generalisation with a transparent reliability metric (the CR score). We evaluated multi-hop reasoning on about 470k mentor-student relations from Wikidata, tracing up to 57 generations (over 25.5M paths). Using HDC bundling and unbinding with CR-based denoising, we quantify concept propagation over DAGs. The results support a reinterpretation of the Newton-Leibniz dispute and reveal a phase transition from sparse convergence to a post-Leibniz "superhighway", yielding structural indicators consistent with a Kuhnian paradigm shift. Collision-tolerance mechanisms further induce path-based pruning that favours direct paths, producing an emergent semantic selection equivalent to STDP. VaCoAl thus defines a third paradigm, HDC-AI, complementing LLMs with reversible multi-hop reasoning.
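The bundling and unbinding operations the abstract relies on can be illustrated with a minimal sketch of standard binary HDC (binary spatter codes): XOR for reversible binding, majority vote for bundling, and a clean-up memory that selects the stored item most similar to a noisy unbinding result. This is a generic HDC illustration, not PyVaCoAl's actual API; the Galois-field diffusion and CR score of VaCoAl are not modelled here, and all names below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # dimensionality of the binary hypervectors

def random_hv():
    """Random dense binary hypervector; random pairs are ~orthogonal (Hamming distance ~ D/2)."""
    return rng.integers(0, 2, D, dtype=np.uint8)

def bind(a, b):
    """XOR binding: reversible, since bind(bind(a, b), b) == a."""
    return a ^ b

def bundle(*hvs):
    """Majority-vote bundling: the result stays similar to every bundled input."""
    hvs = list(hvs)
    if len(hvs) % 2 == 0:
        hvs.append(random_hv())  # random tie-breaker for even counts
    return (np.sum(hvs, axis=0) * 2 > len(hvs)).astype(np.uint8)

def similarity(a, b):
    """Normalised bit agreement in [0, 1]; ~0.5 for unrelated vectors, 1.0 for identical ones."""
    return 1.0 - np.count_nonzero(a ^ b) / D

# Item memory: role and filler hypervectors (names purely illustrative).
mentor, student = random_hv(), random_hv()
newton, leibniz = random_hv(), random_hv()

# Encode one relation as a bundle of role-filler bindings,
# then recover a filler by unbinding (XOR is its own inverse).
record = bundle(bind(mentor, newton), bind(student, leibniz))
noisy = bind(record, student)  # ~ leibniz plus crosstalk noise from the other binding

# Clean-up memory: pick the stored item with the highest similarity to the noisy result.
items = {"newton": newton, "leibniz": leibniz}
best = max(items, key=lambda k: similarity(noisy, items[k]))
```

At D = 10,000 the crosstalk from bundling leaves the unbound vector far closer to the correct filler (similarity near 0.75) than to unrelated items (near 0.5), which is what makes multi-hop unbinding over a DAG viable once a denoising step such as the paper's CR-based filtering is applied at each hop.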