In this work, we introduce the first framework for Motion-aware Event Suppression, which learns to filter, in real time, events triggered by independently moving objects (IMOs) and ego-motion. Our model jointly segments IMOs in the current event stream and predicts their future motion, enabling anticipatory suppression of dynamic events before they occur. Our lightweight architecture achieves 173 Hz inference on consumer-grade GPUs with less than 1 GB of memory usage, outperforming previous state-of-the-art methods on the challenging EVIMO benchmark by 67\% in segmentation accuracy while operating at a 53\% higher inference rate. Moreover, we demonstrate significant benefits for downstream applications: our method accelerates Vision Transformer inference by 83\% via token pruning and improves event-based visual odometry accuracy, reducing Absolute Trajectory Error (ATE) by 13\%.