Traveling waves of neural activity have been observed throughout the brain across a diversity of regions and scales; however, their precise computational role remains debated. One physically inspired hypothesis suggests that the cortical sheet may act like a wave-propagating system capable of invertibly storing a short-term memory of sequential stimuli through induced waves traveling across the cortical surface; indeed, many experimental results from neuroscience correlate wave activity with memory tasks. To date, however, the computational implications of this idea have remained hypothetical due to the lack of a simple recurrent neural network architecture capable of exhibiting such waves. In this work, we introduce a model to fill this gap, which we denote the Wave-RNN (wRNN), and demonstrate how such an architecture efficiently encodes the recent past through a suite of synthetic memory tasks in which wRNNs learn faster and reach significantly lower error than wave-free counterparts. We further explore the implications of this memory storage system on more complex sequence modeling tasks such as sequential image classification, and find that wave-based models not only outperform comparable wave-free RNNs while using significantly fewer parameters, but additionally perform comparably to more complex gated architectures such as LSTMs and GRUs.
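To make the wave-based memory intuition concrete, the following is a minimal sketch (not the paper's implementation) of a recurrent update in which the hidden state is arranged on a one-dimensional ring and evolves under a forward-Euler discretization of the one-way wave equation, so that an input impulse propagates around the ring and remains readable from the state several steps later. The names `wave_step`, `nu`, and the input matrix `V` are illustrative assumptions, not identifiers from the source.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16    # hidden units arranged on a 1-D ring
nu = 1.0  # wave speed; nu = 1 moves the wave one cell per step

def wave_step(h, x, V, b):
    # One-way wave equation on a ring, forward Euler:
    # h[i] <- tanh( h[i] + nu * (h[i-1] - h[i]) + input drive )
    # (illustrative recurrence, not the paper's exact wRNN update)
    drive = V @ x + b
    h_shifted = np.roll(h, 1)  # periodic boundary: value from neighbor i-1
    return np.tanh(h + nu * (h_shifted - h) + drive)

V = rng.normal(scale=0.1, size=(n, 1))  # input projection (assumed shape)
b = np.zeros(n)
h = np.zeros(n)

# Inject a single impulse, then run with zero input: the induced bump
# travels around the ring, carrying a trace of the past stimulus.
h = wave_step(h, np.array([1.0]), V, b)
for _ in range(5):
    h = wave_step(h, np.array([0.0]), V, b)
```

Because the zero-input update with `nu = 1` reduces to a squashed circular shift, the activity peak moves one position per step; a readout that knows the wave speed can therefore recover *when* the stimulus arrived, which is the invertible short-term storage the abstract describes.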