Behavior can be described as a temporal sequence of actions driven by neural activity. To learn complex sequential patterns in neural networks, memories of past activities need to persist on significantly longer timescales than the relaxation times of single-neuron activity. While recurrent networks can produce such long transients, training these networks is a challenge. Learning via error propagation confers a significant functional advantage on models such as FORCE, RTRL, or BPTT, but at the expense of biological plausibility. While reservoir computing circumvents this issue by learning only the readout weights, it does not scale well with problem complexity. We propose that two prominent structural features of cortical networks can alleviate these issues: the presence of a certain network scaffold at the onset of learning and the existence of dendritic compartments for enhancing neuronal information storage and computation. Our resulting model for Efficient Learning of Sequences (ELiSe) builds on these features to acquire and replay complex non-Markovian spatio-temporal patterns using only local, always-on, and phase-free synaptic plasticity. We showcase the capabilities of ELiSe in a mock-up of birdsong learning, and demonstrate its flexibility with respect to parametrization, as well as its robustness to external disturbances.
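To make the reservoir-computing contrast concrete: in that paradigm the recurrent weights stay fixed and only a linear readout is fit. The following is a minimal sketch of this idea, an echo state network trained by ridge regression on a one-step-ahead sine prediction task; all dimensions, the spectral radius, and the regularization constant are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes -- chosen for illustration only.
n_res, n_in, T = 100, 1, 200

# Fixed random reservoir: only the readout W_out is ever trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0.0, 1.0, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale spectral radius below 1

# Teacher signal: a sine wave; task is to predict the next sample.
u = np.sin(np.linspace(0, 8 * np.pi, T + 1))

# Drive the reservoir with the input and collect its states.
x = np.zeros(n_res)
X = np.zeros((T, n_res))
for t in range(T):
    x = np.tanh(W @ x + W_in @ np.array([u[t]]))
    X[t] = x

# Fit only the readout weights by ridge regression,
# discarding an initial washout period.
wash = 50
Y = u[1 : T + 1]
A = X[wash:]
W_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(n_res), A.T @ Y[wash:])

pred = X @ W_out
err = np.mean((pred[wash:] - Y[wash:]) ** 2)
```

Because the recurrent weights are never updated, no error signal must be propagated through the network, which is what makes the scheme cheap and plausible, and also what limits its capacity as the target patterns grow more complex.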