Efficient planning and sequence selection are central to intelligence, yet current approaches remain largely incompatible with biological computation. Classical graph algorithms such as Dijkstra's or A* require global state and biologically implausible operations such as backtracing, while reinforcement learning methods rely on slow gradient-based policy updates that appear inconsistent with the rapid behavioral adaptation observed in natural systems. We propose a biologically plausible algorithm for shortest-path computation that operates through local spike-based message-passing with realistic processing delays. The algorithm exploits spike-timing coincidences to identify nodes on optimal paths: neurons that receive inhibitory-excitatory message pairs earlier than predicted reduce their response delays, creating a temporal compression that propagates backward from target to source. Through an analytical proof and simulations on random spatial networks, we demonstrate that the algorithm converges and discovers all shortest paths using purely timing-based mechanisms. By showing how short-term timing dynamics alone can compute shortest paths, this work provides new insight into how biological networks might solve complex computational problems through purely local computation and relative spike-time prediction. These findings open new directions for understanding distributed computation in biological and artificial systems, with possible implications for computational neuroscience, AI, reinforcement learning, and neuromorphic systems.
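The core timing principle can be illustrated with a minimal sketch. This is not the authors' spiking model (which uses local delay adaptation and inhibitory-excitatory message pairs); it is a conventional event-driven simulation showing the underlying coincidence test: when spikes propagate as a wavefront from the source and, separately, from the target over reversed edges, a node lies on a shortest path exactly when its forward and backward first-arrival times sum to the shortest total travel time. The graph structure and function names here are illustrative assumptions.

```python
import heapq

def spike_arrival_times(graph, source):
    """Event-driven wavefront: each edge (u, v) carries a propagation delay,
    and a node 'fires' at its first spike arrival. First-arrival time then
    equals shortest-path distance (timing-based relaxation, Dijkstra-like)."""
    arrival = {source: 0.0}
    events = [(0.0, source)]
    while events:
        t, u = heapq.heappop(events)
        if t > arrival.get(u, float("inf")):
            continue  # stale event: this node already fired earlier
        for v, delay in graph.get(u, []):
            if t + delay < arrival.get(v, float("inf")):
                arrival[v] = t + delay
                heapq.heappush(events, (t + delay, v))
    return arrival

def shortest_path_nodes(graph, source, target):
    """Coincidence test: a node is on some shortest source->target path iff
    forward arrival time + backward arrival time == total shortest time."""
    reverse = {u: [] for u in graph}
    for u, edges in graph.items():
        for v, d in edges:
            reverse.setdefault(v, []).append((u, d))
    fwd = spike_arrival_times(graph, source)
    bwd = spike_arrival_times(reverse, target)
    total = fwd[target]
    return {u for u in graph
            if fwd.get(u, float("inf")) + bwd.get(u, float("inf")) == total}

# Hypothetical toy network: two tied shortest paths (via a, via b) and one
# strictly longer detour (via c).
graph = {"s": [("a", 1.0), ("b", 2.0), ("c", 5.0)],
         "a": [("t", 2.0)],
         "b": [("t", 1.0)],
         "c": [("t", 5.0)],
         "t": []}
print(shortest_path_nodes(graph, "s", "t"))  # c is excluded
```

Note how all shortest paths are recovered at once: nodes on either tied route satisfy the timing coincidence, mirroring the abstract's claim that the algorithm discovers all shortest paths rather than a single one.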