Reservoir computing (RC) is a class of computational methods, including Echo State Networks (ESNs) and Liquid State Machines (LSMs), that perform pattern recognition and temporal analysis using any non-linear dynamical system. This is possible because RC is a shallow network model with only input, reservoir, and readout layers, in which the input and reservoir weights are fixed and only the readout layer is trained. The LSM is a special case of reservoir computing inspired by the organization of neurons in the brain, and generally refers to spike-based reservoir computing approaches. LSMs have achieved decent performance on some neuromorphic vision and speech datasets, but a common problem is that, since the model is largely fixed, the main way to improve performance is to scale up the reservoir size, which yields diminishing returns despite a tremendous increase in model size and computation. In this paper, we propose two approaches for effectively ensembling LSM models, the Multi-Length Scale Reservoir Ensemble (MuLRE) and the Temporal Excitation Partitioned Reservoir Ensemble (TEPRE), and benchmark them on the standard neuromorphic benchmarks Neuromorphic-MNIST (N-MNIST), Spiking Heidelberg Digits (SHD), and DVSGesture. We achieve 98.1% test accuracy on N-MNIST with a 3600-neuron LSM model, higher than any prior LSM-based approach, and 77.8% test accuracy on SHD, on par with a standard recurrent spiking neural network trained by backpropagation through time (BPTT). We also propose receptive-field-based input weights to the reservoir to work alongside the MuLRE model for vision tasks. Thus, we introduce effective means of scaling up the performance of LSM models and evaluate them on relevant neuromorphic benchmarks.