Task-trained recurrent neural networks (RNNs) are versatile models of dynamical processes widely used in machine learning and neuroscience. While RNNs are easily trained to perform a wide range of tasks, the nature and extent of the degeneracy in the resulting solutions (i.e., the variability across trained RNNs) remain poorly understood. Here, we provide a unified framework for analyzing degeneracy at three levels: behavior, neural dynamics, and weight space. We analyze RNNs trained on diverse tasks spanning machine learning and neuroscience, including N-bit flip-flop, sine wave generation, delayed discrimination, and path integration. Our key finding is that the variability across RNN solutions, quantified at the levels of neural dynamics and trained weights, depends primarily on network capacity and on task characteristics such as complexity. We introduce information-theoretic measures to quantify task complexity and show that increasing task complexity consistently reduces degeneracy in neural dynamics and generalization behavior while increasing degeneracy in weight space. These relationships hold across diverse tasks, and building on them we provide several strategies to control solution degeneracy, enabling task-trained RNNs to learn more consistent or more diverse solutions as needed. We envision that these insights will lead to more reliable machine learning models and could inspire strategies to better understand and control the degeneracy observed in neuroscience experiments.
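The abstract does not specify the paper's information-theoretic complexity measures, so the following is only a minimal illustrative sketch of the general idea: quantifying task complexity via the Shannon entropy of discretized target outputs, using the N-bit flip-flop task as an example. All function names and the entropy proxy here are assumptions for illustration, not the authors' actual method.

```python
# Illustrative sketch only: one simple information-theoretic proxy for task
# complexity is the Shannon entropy of the target-output distribution.
# This is a hypothetical stand-in, not the paper's measure.
import numpy as np

def shannon_entropy(samples: np.ndarray, n_bins: int) -> float:
    """Entropy (in bits) of a 1-D sample set, estimated via a histogram."""
    counts, _ = np.histogram(samples, bins=n_bins)
    p = counts / counts.sum()
    p = p[p > 0]  # drop empty bins to avoid log(0)
    return float(-(p * np.log2(p)).sum())

def flip_flop_targets(n_bits: int, n_steps: int, flip_prob: float = 0.05,
                      seed: int = 0) -> np.ndarray:
    """Toy target outputs for an N-bit flip-flop task: each bit holds +/-1
    and flips to a random sign at random times."""
    rng = np.random.default_rng(seed)
    flips = rng.random((n_steps, n_bits)) < flip_prob
    signs = rng.choice([-1.0, 1.0], size=(n_steps, n_bits))
    targets = np.zeros((n_steps, n_bits))
    state = np.ones(n_bits)
    for t in range(n_steps):
        state = np.where(flips[t], signs[t], state)
        targets[t] = state
    return targets

# Complexity grows with the number of bits the network must remember:
# more bits -> more distinct target patterns -> higher output entropy.
for n_bits in (1, 2, 3):
    y = flip_flop_targets(n_bits, n_steps=10_000)
    # Encode each time step's N-bit pattern as one integer symbol.
    symbols = ((y > 0).astype(int) * (2 ** np.arange(n_bits))).sum(axis=1)
    print(n_bits, "bits -> entropy (bits):",
          round(shannon_entropy(symbols, n_bins=2 ** n_bits), 3))
```

Under this toy proxy, the entropy approaches N bits as all 2^N flip-flop states become equally visited, which matches the intuition that tasks demanding more memory states are more complex.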