Hardware-based neuromorphic computing remains an elusive goal with the potential to profoundly impact future technologies and deepen our understanding of emergent intelligence. The learning-from-mistakes algorithm is one of the few training algorithms inspired by the brain's simple learning rules, using inhibition and pruning to demonstrate self-organized learning. Here we implement this algorithm in purely neuromorphic memristive hardware through a co-design process that requires evaluating hardware trade-offs and constraints. Learning-from-mistakes has been shown to successfully train small networks to function as binary classifiers and perceptrons; however, without tailoring the hardware to the algorithm, performance decreases exponentially as the network size increases. In implementing neuromorphic algorithms on neuromorphic hardware, we investigate the trade-offs between depth, controllability, and capacity, the last being the number of learnable patterns. We emphasize the significance of topology and of governing equations, demonstrating theoretical tools that aid the co-design of neuromorphic hardware and algorithms. We provide quantitative techniques for evaluating the computational capacity of a neuromorphic device based on the measurements performed and the underlying circuit structure. This approach shows that breaking the symmetry of a neural network can increase both the controllability and the average network capacity. By pruning the circuit, neuromorphic algorithms in all-memristive device circuits leverage stochastic resources to drive local contrast in the network weights. Our combined experimental and simulation efforts explore the parameters that make a network suited to displaying emergent intelligence from simple rules.