Continuous attractors offer a unique class of solutions for storing continuous-valued variables in the states of recurrent systems over indefinitely long time intervals. Unfortunately, continuous attractors are in general severely structurally unstable: they are destroyed by most infinitesimal changes to the dynamical law that defines them. This fragility limits their utility, especially in biological systems, whose recurrent dynamics are subject to constant perturbation. We observe that the bifurcations from continuous attractors in theoretical neuroscience models take various structurally stable forms. Although the asymptotic behaviors by which these forms maintain memory are categorically distinct, their finite-time behaviors are similar. Building on persistent manifold theory, we explain the commonalities between bifurcations from, and approximations of, continuous attractors. A fast-slow decomposition analysis uncovers the persistent manifold that survives the seemingly destructive bifurcation. Moreover, recurrent neural networks trained on analog memory tasks display approximate continuous attractors with the predicted slow-manifold structure. Therefore, continuous attractors are functionally robust and remain useful as a universal analogy for understanding analog memory.
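The fragility-versus-finite-time-robustness claim can be illustrated with a minimal toy model (our own sketch, not from the paper): a one-dimensional line attractor dx/dt = 0, which stores any real value indefinitely, perturbed by an arbitrarily small term to dx/dt = -eps*sin(x). The perturbation destroys the continuum of fixed points, leaving only isolated attractors at multiples of 2*pi; yet on time scales short relative to 1/eps, stored values drift only by O(eps*T), so the system still functions as an analog memory.

```python
import numpy as np

def simulate(x0, eps, dt, steps):
    """Forward-Euler integration of dx/dt = -eps * sin(x),
    a line attractor (eps = 0) perturbed into discrete fixed points."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x += dt * (-eps * np.sin(x))
    return x

# Eight stored "memories" spread over one period of the perturbation.
x0 = np.linspace(0.3, 2 * np.pi - 0.3, 8)
eps = 0.01  # perturbation strength

# Finite time (T = 10 << 1/eps): drift is bounded by eps * T = 0.1,
# so the stored values are still approximately readable.
x_short = simulate(x0, eps, dt=0.1, steps=100)
print(np.max(np.abs(x_short - x0)))

# Asymptotically (T >> 1/eps): every state collapses onto one of the
# surviving fixed points at 0 or 2*pi -- the memory continuum is gone.
x_long = simulate(x0, eps, dt=0.1, steps=200_000)
print(x_long)
```

The perturbed flow is slow everywhere on the former attractor (speed at most eps), which is the one-dimensional analogue of the slow manifold that the fast-slow decomposition uncovers in the higher-dimensional network models.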