Programming recurrent spiking neural networks (RSNNs) to robustly perform multi-timescale computation remains a difficult challenge. To address this, we describe a single-shot weight learning scheme that embeds robust multi-timescale dynamics into attractor-based RSNNs by exploiting the properties of high-dimensional distributed representations. We embed finite state machines into the RSNN dynamics by superimposing a symmetric autoassociative weight matrix and asymmetric transition terms, each formed by binding an input vector with a heteroassociative outer product between states. Our approach is validated through simulations with highly nonideal weights, an experimental closed-loop memristive hardware setup, and deployment on Loihi 2, where it scales seamlessly to large state machines. This work introduces a scalable approach to embedding robust symbolic computation through recurrent dynamics into neuromorphic hardware, without requiring parameter fine-tuning or significant platform-specific optimisation. Moreover, it demonstrates that distributed symbolic representations serve as a highly capable, representation-invariant language for cognitive algorithms in neuromorphic hardware.
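The weight construction described above can be illustrated with a minimal non-spiking sketch: random bipolar vectors stand in for the distributed state and input representations, the symmetric autoassociative term makes each state an attractor, and each asymmetric transition term binds (via elementwise Hadamard product) the target state to an input vector before heteroassociating it with the source state. The state names, input names, gain parameter, and sign-activation update rule below are illustrative assumptions, not the paper's exact spiking dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000  # dimensionality of the bipolar distributed representation

# Hypothetical 3-state machine over inputs "u" and "v" (names are illustrative)
states = {s: rng.choice([-1.0, 1.0], N) for s in ("A", "B", "C")}
inputs = {i: rng.choice([-1.0, 1.0], N) for i in ("u", "v")}
transitions = [("A", "u", "B"), ("B", "u", "C"), ("C", "v", "A")]

# Symmetric autoassociative term: makes each state vector a fixed point
W_auto = sum(np.outer(s, s) for s in states.values()) / N

# Asymmetric transition terms: bind the target state to the input vector
# (elementwise Hadamard product), then heteroassociate with the source state
W_trans = sum(np.outer(states[dst] * inputs[inp], states[src])
              for src, inp, dst in transitions) / N

def step(x, inp=None, gain=2.0):
    """One recurrent update with sign activation (an assumed toy dynamic).
    Multiplying by the presented input vector unbinds the matching
    transition term, pushing the network out of the current attractor."""
    drive = W_auto @ x
    if inp is not None:
        drive = drive + gain * inputs[inp] * (W_trans @ x)
    return np.sign(drive)

x = states["A"].copy()
x = step(x, "u")   # A --u--> B: overlap with states["B"] becomes close to 1
x = step(x)        # no input: B is now held as a stable attractor
```

With no input presented, the autoassociative term dominates and the network rests in the current state's attractor; presenting an input briefly amplifies the bound transition term enough to switch attractors, which is the mechanism the abstract refers to as superimposing symmetric and asymmetric terms.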