This paper reports an unexpected finding: in a deterministic hyperdimensional computing (HDC) architecture built on Galois-field algebra, a path-dependent semantic selection mechanism emerges that is equivalent to spike-timing-dependent plasticity (STDP), with a magnitude predictable a priori by a closed-form expression that matches large-scale measurements. This finding addresses, at the algebraic level, limitations of modern AI including catastrophic forgetting, learning stagnation, and the Binding Problem. We propose VaCoAl (Vague Coincident Algorithm) and its Python implementation, PyVaCoAl, which combine ultra-high-dimensional memory with deterministic logic. Rooted in Sparse Distributed Memory, VaCoAl resolves orthogonalisation and retrieval in high-dimensional binary spaces via Galois-field diffusion, enabling low-load deployment. It is a memory-centric architecture that prioritises retrieval and association, enabling reversible composition while preserving element independence and supporting compositional generalisation with a transparent reliability metric (the CR score). We evaluated multi-hop reasoning on approximately 470k mentor-student relations from Wikidata, tracing up to 57 generations (over 25.5 million paths). Using HDC bundling and unbinding with CR-based denoising, we quantify concept propagation over directed acyclic graphs (DAGs). The results support a reinterpretation of the Newton-Leibniz dispute and reveal a phase transition from sparse convergence to a post-Leibniz "superhighway", yielding structural indicators consistent with a Kuhnian paradigm shift. Collision-tolerance mechanisms further induce path-based pruning that favours direct paths, producing an emergent semantic selection equivalent to STDP. VaCoAl thus defines a third paradigm, HDC-AI, complementing LLMs with reversible multi-hop reasoning.
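The bundling, unbinding, and denoising steps mentioned above can be illustrated with a minimal sketch of standard binary HDC operations (XOR binding, majority-vote bundling, Hamming similarity). This is not the VaCoAl/PyVaCoAl implementation: the item names (`mentor_role`, `barrow`, `newton`) are hypothetical, and the `cr_like` similarity margin is only an illustrative stand-in for the paper's CR score.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality; near-orthogonality requires large D

def rand_hv():
    """Random dense binary hypervector; any two such vectors are
    near-orthogonal (Hamming similarity ~ 0.5) at high D."""
    return rng.integers(0, 2, D, dtype=np.uint8)

def bind(a, b):
    """XOR binding: reversible composition, since bind(bind(a, b), b) == a."""
    return a ^ b

def bundle(hvs):
    """Bitwise majority-vote superposition; a random hypervector
    breaks ties when the count is even."""
    hvs = list(hvs)
    if len(hvs) % 2 == 0:
        hvs.append(rand_hv())
    s = np.sum(hvs, axis=0)
    return (2 * s > len(hvs)).astype(np.uint8)

def similarity(a, b):
    """1 minus normalised Hamming distance; ~0.5 for unrelated vectors."""
    return 1.0 - float(np.mean(a != b))

# Encode one mentor->student edge as role-filler bindings, then bundle them.
mentor_role, student_role = rand_hv(), rand_hv()
barrow, newton = rand_hv(), rand_hv()  # hypothetical item vectors
edge = bundle([bind(mentor_role, barrow), bind(student_role, newton)])

# Unbinding the mentor role yields a noisy copy of `barrow`; comparing it
# against an item memory denoises the retrieval, and the similarity margin
# acts as a reliability score (an illustrative stand-in for the CR score).
noisy = bind(edge, mentor_role)
cr_like = similarity(noisy, barrow)
assert cr_like > similarity(noisy, newton)
```

Because XOR binding is its own inverse, composition is exactly reversible on clean vectors, while bundled superpositions are recovered only approximately and then cleaned up against stored items; this is the sense in which the architecture supports reversible composition with a transparent retrieval-reliability metric.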