The Hopfield network model and its generalizations were introduced as models of associative, or content-addressable, memory. They have been widely investigated both as an unsupervised learning method in artificial intelligence and as a model of biological neural dynamics in computational neuroscience. The complexity features of biological neural networks have been attracting the interest of the scientific community for the last two decades. More recently, concepts and tools borrowed from complex network theory have been applied to artificial neural networks and learning, thus focusing on their topological aspects. However, the temporal structure is also a crucial property displayed by biological neural networks, and it has been investigated in the framework of systems displaying complex intermittency. The Intermittency-Driven Complexity (IDC) approach indeed focuses on the metastability of self-organized states, whose signature is a power-law decay in the inter-event time distribution or a scaling behavior in the related event-driven diffusion processes. The investigation of IDC in neural dynamics, and of its relationship with network topology, is still in its early stages. In this work we present the preliminary results of an IDC analysis carried out on a bio-inspired Hopfield-type neural network, comparing two different connectivities: scale-free vs. random network topology. We found that random networks can trigger complexity features similar to those of scale-free networks, albeit with some differences and for different parameter values, in particular for different noise levels.