Computers do practically everything better, faster, and more power-efficiently than the brain. For example, a calculator performs numerical computations more energy-efficiently than any human. Yet modern AI models are a thousand times less efficient than the brain. These models rely on ever-larger artificial neural networks (ANNs) to boost their encoding capacity, requiring GPUs to perform large-scale matrix multiplications. In contrast, the brain's spiking neural networks (SNNs) exhibit factorially explosive encoding capacity and compute through the polychronization of spikes rather than explicit matrix-vector products, resulting in far lower energy requirements. This manifesto proposes a paradigm for framing popular AI models in terms of spiking networks and polychronization, and for interpreting spiking activity as nature's way of implementing look-up tables. This suggests a path toward converting AI models into a novel class of architectures with much smaller size yet combinatorially large encoding capacity, offering the promise of a thousandfold improvement in performance. Code is available at https://github.com/izhikevich/SNN
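The contrast between the two computational styles can be sketched in a toy example. This is not the repository's implementation, only an illustration under simplifying assumptions: an ANN layer is an explicit matrix-vector product, whereas a polychronous group is identified by the relative timing of its spikes, which can be read out as a key into a look-up table (all timings and group labels below are invented for illustration).

```python
import numpy as np

# ANN-style layer: output requires an explicit matrix-vector product,
# i.e., one multiply-accumulate pass per output unit.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))   # weight matrix (4 outputs, 3 inputs)
x = np.array([1.0, 0.5, -0.2])    # input activations
y = W @ x                          # dense multiply-accumulate

# SNN-style readout: a polychronous group is characterized by the
# precise relative timing of its spikes. Treating that timing pattern
# as a key turns recognition into a table look-up, with no
# matrix arithmetic. Offsets are in milliseconds (hypothetical values).
lookup = {
    (0, 3, 7): "group_A",   # neuron spikes at t, t+3, t+7
    (0, 2, 9): "group_B",
}
observed_offsets = (0, 3, 7)       # spike-time pattern seen on the wire
decoded = lookup.get(observed_offsets, "unrecognized")
print(decoded)  # -> group_A
```

The look-up costs one hash probe regardless of network size, which is the intuition behind the energy argument: capacity grows combinatorially with the number of distinguishable timing patterns, not with the number of multiply-accumulate operations.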