Biological learning unfolds continuously in time, yet most algorithmic models rely on discrete updates and separate inference and learning phases. We study a continuous-time neural model that unifies several biologically plausible learning algorithms and removes the need for phase separation. Rules including stochastic gradient descent (SGD), feedback alignment (FA), direct feedback alignment (DFA), and Kolen-Pollack (KP) emerge naturally as limiting cases of the dynamics. Simulations show that these continuous-time networks learn stably at biological timescales, even under temporal mismatches and integration noise. Through analysis and simulation, we show that learning depends on temporal overlap: a synapse updates correctly only when its input and the corresponding error signal coincide in time. When inputs are held constant, learning strength declines linearly as the delay between input and error approaches the stimulus duration, explaining the observed robustness and failure modes across network depths. Critically, robust learning requires the synaptic plasticity timescale to exceed the stimulus duration by one to two orders of magnitude. For typical cortical stimuli (tens of milliseconds), this places the functional plasticity window in the few-second range, a testable prediction that identifies seconds-scale eligibility traces as necessary for error-driven learning in biological circuits.
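The temporal-overlap claim can be illustrated with a minimal toy simulation: a single synapse integrates the product of its input and a delayed error signal, so the accumulated update is proportional to how long the two signals coincide. This is an illustrative sketch only, not the paper's model; the function name `delta_w` and all parameters (`T`, `delay`, `dt`, `tau_w`) are assumptions introduced here for clarity.

```python
import numpy as np

def delta_w(T=1.0, delay=0.0, dt=1e-3, tau_w=1.0):
    """Toy continuous-time update: integrate dw/dt = x(t) * e(t) / tau_w
    over one trial. The input x(t) is on during [0, T); the error e(t)
    arrives after `delay` and also lasts T. (Hypothetical sketch, not
    the paper's full network dynamics.)"""
    t = np.arange(0.0, delay + T, dt)
    x = ((t >= 0.0) & (t < T)).astype(float)            # constant input pulse
    e = ((t >= delay) & (t < delay + T)).astype(float)  # delayed error pulse
    # Euler integration of the weight change over the trial
    return np.sum(x * e) * dt / tau_w

# The accumulated update shrinks linearly as the input-error delay
# approaches the stimulus duration T, matching the abstract's claim.
updates = [delta_w(T=1.0, delay=d) for d in (0.0, 0.25, 0.5, 0.75, 1.0)]
```

With zero delay the signals overlap fully and the update is maximal; at `delay == T` the overlap (and hence the update) vanishes, mirroring the linear decline described above.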