Bio-inspired neuromorphic cameras asynchronously record per-pixel brightness changes and generate sparse event streams. They can capture dynamic scenes with little motion blur and preserve more detail under extreme illumination conditions. Due to the multidimensional address-event structure, most existing vision algorithms cannot properly handle asynchronous event streams. Although several event representations and processing methods have been developed to address this issue, they are typically driven by a large number of events, leading to substantial runtime and memory overheads. In this paper, we propose a new graph representation of the event data and couple it with a Graph Transformer to perform accurate neuromorphic classification. Extensive experiments show that our approach yields better results and excels in challenging realistic situations where only a small number of events and limited computational resources are available, paving the way for neuromorphic applications embedded in mobile devices.