Neural populations exhibit latent dynamical structure that drives time-evolving spiking activity, motivating the search for models that capture both intrinsic network dynamics and external unobserved influences. In this work, we introduce LangevinFlow, a sequential Variational Auto-Encoder in which the time evolution of latent variables is governed by the underdamped Langevin equation. Our approach incorporates physical priors -- such as inertia, damping, a learned potential function, and stochastic forces -- to represent both autonomous and non-autonomous processes in neural systems. Crucially, the potential function is parameterized as a network of locally coupled oscillators, biasing the model toward the oscillatory and flow-like behaviors observed in biological neural populations. Our model features a recurrent encoder, a one-layer Transformer decoder, and Langevin dynamics in the latent space. Empirically, our method outperforms state-of-the-art baselines on synthetic neural populations generated by a Lorenz attractor, closely matching ground-truth firing rates. On the Neural Latents Benchmark (NLB), the model achieves superior held-out neuron likelihoods (bits per spike) and forward prediction accuracy across four challenging datasets. It also matches or surpasses alternative methods in decoding behavioral metrics such as hand velocity. Overall, this work introduces a flexible, physics-inspired, high-performing framework for modeling complex neural population dynamics and their unobserved influences.
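To make the latent dynamics concrete, the following is a minimal NumPy sketch of one Euler-Maruyama integration step of the underdamped Langevin equation, with a toy locally coupled oscillator potential standing in for the paper's learned potential. All function names, the specific coupling form, and the parameter values here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def coupled_oscillator_potential_grad(x, k=1.0, c=0.5):
    """Gradient of a toy potential U(x): independent harmonic wells (k term)
    plus nearest-neighbor coupling between latent dimensions (c term).
    Illustrative stand-in for the learned, locally coupled oscillator potential."""
    grad = k * x
    # local coupling: each latent unit is pulled toward its ring neighbors
    grad += c * (2.0 * x - np.roll(x, 1) - np.roll(x, -1))
    return grad

def langevin_step(x, v, dt=0.01, mass=1.0, gamma=0.5, sigma=0.1, rng=None):
    """One Euler-Maruyama step of the underdamped Langevin equation:
        dx = v dt
        m dv = (-gamma * v - grad U(x)) dt + sigma dW
    Returns the updated position and velocity of the latent state."""
    rng = rng or np.random.default_rng()
    noise = sigma * np.sqrt(dt) * rng.standard_normal(x.shape)
    accel = (-gamma * v - coupled_oscillator_potential_grad(x)) / mass
    v_new = v + accel * dt + noise / mass
    x_new = x + v_new * dt  # semi-implicit (symplectic-Euler-style) update
    return x_new, v_new
```

With `sigma = 0` the stochastic force vanishes and the damping term dissipates energy, so trajectories relax toward minima of the potential; with `sigma > 0` the noise models unobserved external influences that keep the latent state fluctuating.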