Event cameras have emerged as a revolutionary technology, offering temporal resolution that far surpasses standard active-pixel cameras. The technology draws biological inspiration from photoreceptors and the first retinal synapse. This research showcases the potential of additional retinal functionalities for extracting visual features. We present a domain-agnostic and efficient algorithm for ego-motion compensation based on Object Motion Sensitivity (OMS), one of several features computed within the mammalian retina. Building on experimental neuroscience, we translate the biological OMS circuitry into a low-overhead algorithm that suppresses camera motion, bypassing the need for deep networks and learning. Our system processes event data from dynamic scenes to perform pixel-wise object motion segmentation on both real and synthetic datasets. This paper introduces a bio-inspired computer vision method that reduces the number of parameters by three to six orders of magnitude ($\text{10}^\text{3}$ to $\text{10}^\text{6}$) compared to previous approaches. Our work paves the way for robust, high-speed, and low-bandwidth decision-making for in-sensor computation.
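The intuition behind OMS can be illustrated with a minimal center/surround sketch: global ego-motion drives a pixel and its surround alike, so their difference cancels, while an independently moving object produces a local mismatch that survives thresholding. The function below is a hypothetical simplification for a single accumulated event frame, not the paper's exact circuit; the window radius and threshold are assumed placeholder parameters.

```python
import numpy as np

def oms(event_frame, surround_radius=3, threshold=0.5):
    """Illustrative Object Motion Sensitivity sketch (not the paper's
    exact algorithm): flag a pixel as object motion when its activity
    deviates from the mean activity of its local surround. Uniform
    activity induced by camera (ego) motion cancels out."""
    r = surround_radius
    padded = np.pad(event_frame.astype(float), r, mode="edge")
    # Mean of each (2r+1)x(2r+1) surround via a sliding window.
    win = np.lib.stride_tricks.sliding_window_view(
        padded, (2 * r + 1, 2 * r + 1))
    surround = win.mean(axis=(-1, -2))
    # Pixel-wise segmentation mask: large center/surround mismatch.
    return np.abs(event_frame - surround) > threshold
```

For example, a frame with uniform ego-motion-induced activity plus a small bright patch yields a mask that keeps only the patch, which is the pixel-wise object motion segmentation the abstract describes.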