Intrinsic within-type neuronal heterogeneity is a ubiquitous feature of biological systems, with well-documented computational advantages. Recent work in machine learning has incorporated this diversity by optimizing neuronal parameters alongside synaptic weights, demonstrating state-of-the-art performance on common benchmarks. However, this performance gain comes at significantly higher computational cost, imposed by the enlarged parameter space. Furthermore, it is unclear how neuronal parameters, constrained by the biophysics of their surroundings, could be globally orchestrated to minimize top-down errors. To address these challenges, we postulate that neurons are intrinsically diverse and investigate the computational capabilities of such heterogeneous neuronal parameters. Our results show that intrinsic heterogeneity, viewed as a fixed quenched disorder, often substantially improves performance across hundreds of temporal tasks. Notably, smaller but heterogeneous networks outperform larger homogeneous networks while consuming less data. We elucidate the underlying mechanisms driving this performance boost and illustrate its applicability to both rate and spiking dynamics. Moreover, our findings demonstrate that heterogeneous networks are highly resilient to severe alterations in their recurrent synaptic hyperparameters, and even removing the recurrent connections entirely does not compromise performance. The remarkable effectiveness of small heterogeneous networks with relaxed connectivity requirements is particularly relevant to the neuromorphic community, which faces challenges from device-to-device variability. Understanding the mechanisms of robust computation with heterogeneity also benefits neuroscientists and machine-learning researchers.
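To make the setup concrete, the following is a minimal NumPy sketch (not the authors' code) of the idea described above: a leaky-rate reservoir whose per-neuron time constants are either shared by all units (homogeneous) or sampled once from a distribution and then frozen (heterogeneity as quenched disorder), with only a linear readout trained. The log-normal time-constant distribution, the delayed-copy task, and all hyperparameter values are illustrative assumptions, not the paper's experimental settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_reservoir(u, tau, n=200, dt=1e-3, g=1.2, seed=1):
    """Drive a leaky-rate reservoir with input u and return its states.

    tau may be a scalar (homogeneous network) or a length-n vector of
    per-neuron time constants (heterogeneous, fixed after sampling).
    """
    local_rng = np.random.default_rng(seed)
    W = local_rng.normal(0, g / np.sqrt(n), (n, n))  # fixed random recurrence
    w_in = local_rng.normal(0, 1.0, n)               # fixed random input weights
    x = np.zeros(n)
    states = np.empty((len(u), n))
    for t, u_t in enumerate(u):
        # Leaky integration: dx/dt = (-x + tanh(W x + w_in u)) / tau.
        # With a vector tau, each neuron integrates at its own timescale.
        x = x + dt / tau * (-x + np.tanh(W @ x + w_in * u_t))
        states[t] = x
    return states

# Example temporal task: reproduce a delayed copy of a random input stream.
T, delay = 2000, 50
u = rng.normal(0, 1, T)
target = np.roll(u, delay)

tau_hom = 20e-3                                   # one shared time constant
tau_het = rng.lognormal(np.log(20e-3), 0.5, 200)  # quenched per-neuron taus

for name, tau in [("homogeneous", tau_hom), ("heterogeneous", tau_het)]:
    X = run_reservoir(u, tau)
    # Only the linear readout is trained (ridge regression);
    # recurrent weights and time constants stay fixed.
    w = np.linalg.solve(X.T @ X + 1e-4 * np.eye(X.shape[1]), X.T @ target)
    mse = np.mean((X @ w - target) ** 2)
    print(f"{name}: readout MSE = {mse:.4f}")
```

Under this reading, the diverse time constants give the network a spread of intrinsic timescales for free, with no extra trained parameters, which is one way to interpret why a small heterogeneous network can match or beat a larger homogeneous one.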