In both machine learning and computational neuroscience, plasticity in functional neural networks is frequently expressed as gradient descent on a cost. Often, this imposes symmetry constraints that are difficult to reconcile with local computation, as required by biological networks or neuromorphic hardware. For example, wake-sleep learning in networks characterized by Boltzmann distributions assumes symmetric connectivity. Similarly, the error backpropagation algorithm is notoriously plagued by the weight transport problem between the representation and the error stream. Existing solutions such as feedback alignment circumvent the problem by deferring to the robustness of these algorithms to weight asymmetry; however, they scale poorly with network size and depth. We introduce spike-based alignment learning (SAL), a complementary learning rule for spiking neural networks, which uses spike timing statistics to extract and correct the asymmetry between effective reciprocal connections. Apart from being spike-based and fully local, our proposed mechanism takes advantage of noise. Through an interplay between Hebbian and anti-Hebbian plasticity, synapses can recover the true local gradient. This also alleviates discrepancies that arise from neuron and synapse variability -- an omnipresent property of physical neuronal networks. We demonstrate the efficacy of our mechanism using different spiking network models. First, SAL significantly improves convergence to the target distribution in probabilistic spiking networks compared to Hebbian plasticity alone. Second, in neuronal hierarchies based on cortical microcircuits, SAL effectively aligns feedback weights to the forward pathway, thus allowing the backpropagation of correct feedback errors. Third, our approach enables competitive performance in deep networks while using only local plasticity for weight transport.
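To make the core idea concrete, the sketch below illustrates how noise, a Hebbian correlation term, and an anti-Hebbian decay can jointly pull a feedback weight matrix toward the transpose of the forward matrix. This is not the paper's SAL rule (which operates on spike timing statistics); it is a minimal NumPy caricature with Bernoulli "spikes", and all names (W, B, n_pre, n_post, eta, decay, alignment) are illustrative assumptions. Because noise-driven presynaptic spikes are independent across neurons, the expected Hebbian term is proportional to W.T, so the rule's fixed point aligns B with W.T up to a positive scale.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_post = 30, 10

W = rng.normal(0.0, 0.3, (n_post, n_pre))   # forward weights (representation stream)
B = rng.normal(0.0, 0.3, (n_pre, n_post))   # feedback weights, initially unrelated to W.T

def alignment(B, W):
    """Cosine similarity between the feedback matrix and the transposed forward matrix."""
    return float(np.sum(B * W.T) / (np.linalg.norm(B) * np.linalg.norm(W)))

p     = 0.2          # presynaptic firing probability per step (background noise)
eta   = 0.01         # feedback learning rate
decay = p * (1 - p)  # anti-Hebbian decay, scaled to the variance of the pre spikes

print(f"alignment before: {alignment(B, W):+.3f}")
for _ in range(20000):
    s_pre = (rng.random(n_pre) < p).astype(float)   # noise-driven presynaptic spikes
    u = W @ s_pre                                   # postsynaptic drive via forward weights
    s_post = (rng.random(n_post) < 1.0 / (1.0 + np.exp(-u))).astype(float)  # stochastic post spikes

    # Hebbian term: correlate mean-centered pre spikes with the post spikes they helped cause.
    # Anti-Hebbian term: decay each feedback synapse toward zero.
    # In expectation (for weak weights), the update vanishes when B is proportional to W.T.
    B += eta * (np.outer(s_pre - p, s_post) - decay * B)

print(f"alignment after:  {alignment(B, W):+.3f}")
```

Running this toy loop raises the alignment from near zero to close to one; the residual gap reflects the sigmoid nonlinearity and sampling noise, and the sketch omits everything that makes SAL spike-based (membrane dynamics, causal/acausal spike timing windows, and per-synapse variability).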