Recurrent neural networks (RNNs) in the brain and in silico excel at solving tasks with intricate temporal dependencies. The long timescales required for solving such tasks can arise from properties of individual neurons (the single-neuron timescale, $\tau$, e.g., the membrane time constant in biological neurons) or from recurrent interactions among them (network-mediated timescale). However, the contribution of each mechanism to optimally solving memory-dependent tasks remains poorly understood. Here, we train RNNs to solve $N$-parity and $N$-delayed match-to-sample tasks, whose memory requirements grow with $N$, while simultaneously optimizing the recurrent weights and $\tau$s. We find that for both tasks RNNs develop longer timescales with increasing $N$ but, depending on the learning objective, use different mechanisms to do so. Two distinct curricula define the learning objectives: sequential learning of a single $N$ (single-head) or simultaneous learning of multiple $N$s (multi-head). Single-head networks increase their $\tau$ with $N$ and can solve tasks for large $N$, but they suffer from catastrophic forgetting. In contrast, multi-head networks, which are explicitly required to hold multiple concurrent memories, keep $\tau$ constant and develop longer timescales through recurrent connectivity. Moreover, we show that the multi-head curriculum increases training speed, improves network stability under ablations and perturbations, and allows RNNs to generalize better to tasks beyond their training regime. This curriculum also significantly improves the training of GRUs and LSTMs for large-$N$ tasks. Our results suggest that adapting timescales to task requirements via recurrent interactions allows for learning more complex objectives and improves RNN performance.
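To make the single-neuron timescale mechanism concrete, below is a minimal PyTorch sketch (not the authors' released code) of a leaky RNN whose per-neuron $\tau$s are optimized by gradient descent alongside the recurrent weights. The Euler-discretized update and the log-parameterization of $\tau$ are illustrative assumptions; the paper's exact parameterization may differ.

```python
import torch
import torch.nn as nn

class LeakyRNN(nn.Module):
    """Leaky RNN with trainable per-neuron timescales tau (a sketch)."""

    def __init__(self, n_in, n_hidden):
        super().__init__()
        self.w_in = nn.Linear(n_in, n_hidden, bias=False)
        self.w_rec = nn.Linear(n_hidden, n_hidden)
        # Log-parameterize tau so tau = 1 + exp(log_tau) stays above 1
        # while remaining freely trainable by gradient descent.
        self.log_tau = nn.Parameter(torch.zeros(n_hidden))

    def forward(self, x):                        # x: (batch, time, n_in)
        tau = 1.0 + torch.exp(self.log_tau)      # single-neuron timescales
        alpha = 1.0 / tau                        # per-neuron leak rate
        h = x.new_zeros(x.shape[0], self.log_tau.numel())
        states = []
        for t in range(x.shape[1]):
            # Euler step: larger tau means slower decay of the hidden state,
            # i.e., a longer intrinsic memory for that neuron.
            h = (1 - alpha) * h + alpha * torch.tanh(
                self.w_rec(h) + self.w_in(x[:, t]))
            states.append(h)
        return torch.stack(states, dim=1)        # (batch, time, n_hidden)
```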
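Likewise, the multi-head curriculum can be sketched as one readout head per $N$ trained on a shared recurrent core, so that a single network must hold several concurrent memories. The $\pm 1$ input encoding, head sizes, and optimizer settings below are assumptions for illustration, and `LeakyRNN` refers to the sketch above.

```python
import torch
import torch.nn as nn

def n_parity_batch(batch, length, ns):
    """Random bit streams with, for each N in `ns`, the parity of the last N bits."""
    bits = torch.randint(0, 2, (batch, length))
    x = (2.0 * bits - 1.0).unsqueeze(-1)              # encode bits as +/-1 inputs
    targets = [bits.unfold(1, n, 1).sum(-1) % 2 for n in ns]
    return x, targets

ns = list(range(2, 6))                                # train N = 2..5 simultaneously
core = LeakyRNN(n_in=1, n_hidden=64)
heads = nn.ModuleList(nn.Linear(64, 2) for _ in ns)   # one classifier head per N
opt = torch.optim.Adam(
    list(core.parameters()) + list(heads.parameters()), lr=1e-3)

x, targets = n_parity_batch(batch=32, length=50, ns=ns)
states = core(x)                                      # (batch, time, hidden)
# Sum the losses over all heads: the shared core must solve every N at once.
loss = sum(
    nn.functional.cross_entropy(
        head(states[:, n - 1:]).flatten(0, 1),        # predict once N bits are seen
        tgt.flatten())
    for n, head, tgt in zip(ns, heads, targets))
opt.zero_grad(); loss.backward(); opt.step()
```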