Robotic navigation has historically struggled to reconcile reactive, sensor-based control with the decisive capabilities of model-based planners. This duality becomes critical when the absence of a predominant option among goals leads to indecision, challenging reactive systems to break symmetries without recourse to computationally intensive planners. We propose a parsimonious neuromorphic control framework that bridges this gap for vision-guided navigation and tracking. Image pixels from an onboard camera are encoded as inputs to dynamic neuronal populations that directly transform visual target excitation into egocentric motion commands. A dynamic bifurcation mechanism resolves indecision by delaying commitment until a critical point induced by the environmental geometry. Inspired by recently proposed mechanistic models of animal cognition and opinion dynamics, the neuromorphic controller achieves real-time autonomy with minimal computational burden and a small number of interpretable parameters, and integrates seamlessly with application-specific image-processing pipelines. We validate our approach both in simulation and on an experimental quadrotor platform.
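The abstract does not spell out the controller's equations; as a minimal illustrative sketch (not the paper's actual model), bifurcation-based indecision breaking can be captured by a scalar opinion dynamic of the form dz/dt = -d·z + u·tanh(α·z + b), where z encodes the preference between two symmetric targets and the attention gain u grows with proximity. The function name `opinion_step` and all parameter values below are assumptions for illustration only.

```python
import numpy as np

def opinion_step(z, u, b=0.0, d=1.0, alpha=1.0, dt=0.01):
    """One Euler step of dz/dt = -d*z + u*tanh(alpha*z + b).

    With b = 0 (two equally salient targets), the undecided state
    z = 0 is stable while u < d/alpha and loses stability in a
    pitchfork bifurcation once u exceeds d/alpha, forcing the
    system to commit to one option.
    """
    return z + dt * (-d * z + u * np.tanh(alpha * z + b))

# The attention gain u ramps up as the robot approaches the target
# pair, so the bifurcation (and hence the commitment) is triggered
# by the environmental geometry rather than by an explicit planner.
z = 1e-3                       # tiny initial bias, e.g. sensor noise
steps = 5000
for k in range(steps):
    u = 0.5 + 2.0 * k / steps  # ramps through the critical value 1.0
    z = opinion_step(z, u)

print(abs(z) > 0.5)  # True: symmetry broken, one target chosen
```

The key property this sketch shows is hedged deadlock breaking: below the critical attention level the system remains neutral, and an arbitrarily small bias determines which branch is taken once the critical point is crossed.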