Egomotion estimation is crucial for applications such as autonomous navigation and robotics, where accurate, real-time motion tracking is required. However, traditional methods relying on inertial sensors are highly sensitive to external conditions and suffer from drift, leading to large inaccuracies over long distances. Vision-based methods, particularly those utilising event-based vision sensors, provide an efficient alternative by capturing data only when changes occur in the scene. This approach minimises power consumption while delivering high-speed, low-latency feedback. In this work, we propose a fully event-based pipeline for egomotion estimation that processes the event stream directly within the event-based domain. This method eliminates the need for frame-based intermediaries, enabling low-latency and energy-efficient motion estimation. We construct a shallow spiking neural network that uses a synaptic gating mechanism to convert precise event timing into bursts of spikes. These spikes encode local optical-flow velocities, and the network provides an event-based readout of egomotion. We evaluate the network's performance on a dedicated chip, demonstrating strong potential for low-latency, low-power motion estimation. Additionally, simulations of larger networks show that the system achieves state-of-the-art accuracy in egomotion estimation tasks with event-based cameras, making it a promising solution for real-time, power-constrained robotics applications.
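The synaptic gating idea described above can be illustrated with a minimal toy model. The sketch below is an assumption-laden simplification, not the paper's actual circuit: an event at a "gate" pixel opens a fixed temporal window, and an event at a neighbouring "drive" pixel arriving within that window triggers a burst whose spike count grows as the inter-event delay shrinks. Since local flow speed scales inversely with that delay, burst length acts as an event-based speed readout. The class name, window length, and burst ceiling are all hypothetical.

```python
class GatedBurstNeuron:
    """Toy sketch of a synaptic gating mechanism (hypothetical parameters):
    converts the timing between two pixel events into a spike burst whose
    length encodes local optical-flow speed (speed ~ pixel_pitch / dt)."""

    def __init__(self, window_us=10_000, max_spikes=10):
        self.window_us = window_us    # gate-open duration in microseconds (assumed)
        self.max_spikes = max_spikes  # burst ceiling (assumed)
        self.gate_opened_at = None

    def gate_event(self, t_us):
        """Gating synapse: an event at the first pixel opens the window."""
        self.gate_opened_at = t_us

    def drive_event(self, t_us):
        """An event at the second pixel: emit a burst, or nothing if the
        gate is closed (edge too slow, or events arrived out of order)."""
        if self.gate_opened_at is None:
            return 0
        dt = t_us - self.gate_opened_at
        if dt <= 0 or dt >= self.window_us:
            return 0
        # Shorter delay -> faster edge -> longer burst (integer arithmetic).
        return max(1, self.max_spikes * (self.window_us - dt) // self.window_us)
```

With these assumed parameters, a fast edge crossing the pixel pair in 2 ms yields a burst of 8 spikes, while a slow 8 ms crossing yields only 2, so downstream neurons can read speed directly from burst length without any frame-based intermediary.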