Reasoning is the ability to integrate internal states and external inputs into a meaningful and semantically consistent flow. Contemporary machine learning (ML) systems increasingly rely on such sequential reasoning, from language understanding to multi-modal generation, often operating over dictionaries of prototypical patterns reminiscent of associative memory models. Understanding retrieval and sequentiality in associative memory models therefore offers a powerful bridge for gaining insight into reasoning in ML systems. While the static retrieval properties of associative memory models are well understood, the theoretical foundations of sequential retrieval and multi-memory integration remain limited, with existing studies relying largely on numerical evidence. This work develops a dynamical theory of sequential reasoning in Hopfield networks. We consider the recently proposed input-driven plasticity (IDP) Hopfield network and analyze a two-timescale architecture coupling fast associative retrieval with slow reasoning dynamics. We derive explicit conditions for self-sustained memory transitions, including gain thresholds, escape times, and collapse regimes. Together, these results provide a principled mathematical account of sequentiality in associative memory models, bridging classical Hopfield dynamics and modern reasoning architectures.
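As a minimal illustration of the dynamics the abstract refers to, the sketch below implements classical Hopfield retrieval with Hebbian couplings and an external input drive, and shows how a sufficiently strong drive (above a gain threshold) pushes the network out of one memory's basin of attraction and into another's. This is not the paper's IDP model; all parameter values (`N`, `P`, `gain`) and function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 3                                 # neurons, stored patterns
patterns = rng.choice([-1, 1], size=(P, N))   # prototypical memories
W = (patterns.T @ patterns) / N               # Hebbian coupling matrix
np.fill_diagonal(W, 0.0)                      # no self-coupling

def retrieve(state, drive=None, gain=0.0, steps=50):
    """Fast retrieval dynamics: synchronous sign updates of the local
    field, optionally biased toward an external `drive` pattern with
    strength `gain` (the slow, input-driven component)."""
    s = state.copy()
    for _ in range(steps):
        h = W @ s
        if drive is not None:
            h = h + gain * drive
        s = np.sign(h)
        s[s == 0] = 1  # break ties deterministically
    return s

def overlap(s, mu):
    """Normalized overlap of state `s` with stored pattern `mu`."""
    return float(s @ patterns[mu]) / N

# Static retrieval: a 10%-corrupted cue of pattern 0 flows back to it.
cue = patterns[0].copy()
flip = rng.choice(N, size=N // 10, replace=False)
cue[flip] *= -1
fixed = retrieve(cue)

# Sequential transition: a drive toward pattern 1 with gain above the
# self-field of the current memory triggers a memory-to-memory switch.
moved = retrieve(fixed, drive=patterns[1], gain=2.0)
```

In this toy setting the escape condition is transparent: the drive term `gain * drive` must dominate the self-sustaining local field of the currently retrieved memory (of order one in these units) for the state to leave its basin, which is the kind of gain-threshold condition the abstract describes.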