Vision-based object tracking is a critical component of autonomous aerial navigation, particularly for obstacle avoidance. Neuromorphic dynamic vision sensors (DVS), or event cameras, inspired by biological vision, offer a promising alternative to conventional frame-based cameras: they detect intensity changes asynchronously, provide a high dynamic range, resist motion blur, and operate even in challenging lighting conditions. Spiking neural networks (SNNs) are increasingly used to process these event streams efficiently and asynchronously. Meanwhile, physics-based artificial intelligence (AI) incorporates system-level knowledge into neural networks via physical modeling, enhancing robustness and energy efficiency while providing symbolic explainability. In this work, we present a neuromorphic framework for autonomous drone navigation, focused on detecting and flying through moving gates while avoiding collisions. A shallow SNN, trained in an unsupervised manner, detects moving objects from event-camera input. It is combined with a lightweight, energy-aware physics-guided neural network (PgNN), trained on depth inputs, that predicts optimal flight times and generates near-minimum-energy paths. The system is implemented in the Gazebo simulator as a sensor-fused, vision-to-planning neuro-symbolic framework built on the Robot Operating System (ROS) middleware. This work highlights the potential of combining event-based vision with physics-guided planning for energy-efficient, low-latency autonomous navigation.
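To make the event-processing idea concrete, the following is a minimal sketch of unsupervised moving-object detection with leaky integrate-and-fire (LIF) neurons over per-pixel event counts. It is an illustration of the general technique, not the paper's architecture: the grid size, decay factor, and firing threshold are all assumed values chosen for the toy example.

```python
import numpy as np

def lif_detect(event_bins, shape=(32, 32), decay=0.9, threshold=3.0):
    """One LIF neuron per pixel, driven by binned DVS events.

    event_bins: a list of time bins, each a list of (x, y) event
                coordinates. Pixels that receive sustained events
    (i.e., persistent motion) integrate faster than they leak
    and eventually fire; isolated noise events decay away.
    Returns a per-pixel count of emitted spikes.
    """
    v = np.zeros(shape)                     # membrane potentials
    spike_counts = np.zeros(shape, dtype=int)
    for bin_events in event_bins:
        v *= decay                          # leak between time bins
        for (x, y) in bin_events:
            v[y, x] += 1.0                  # integrate each incoming event
        fired = v >= threshold              # neurons crossing threshold spike
        spike_counts += fired
        v[fired] = 0.0                      # reset fired neurons
    return spike_counts

# Toy input: a small cluster of events persisting over 5 time bins,
# as a moving edge would produce.
event_bins = [[(5, 10), (6, 10), (5, 11)] for _ in range(5)]
counts = lif_detect(event_bins)
```

Pixels inside the active cluster accumulate enough potential to spike, while the rest of the grid stays silent; thresholding `counts` yields a candidate mask of moving regions with no labels or supervised training involved.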