Event cameras are bio-inspired sensors that capture intensity changes asynchronously and output event streams with distinct advantages, such as high temporal resolution. To exploit event cameras for object/action recognition, existing methods predominantly sample and aggregate events over second-level durations at a fixed temporal interval (i.e., frequency). However, they often struggle to capture the spatiotemporal relationships of longer, e.g., minute-level, event streams and to generalize across varying temporal frequencies. To fill this gap, we present a novel framework, dubbed PAST-SSM, that exhibits superior capacity in recognizing events of arbitrary duration (e.g., 0.1s to 4.5s) and generalizing to varying inference frequencies. Our key insight is to learn the spatiotemporal relationships from the encoded event features via a state space model (SSM), whose linear complexity makes it ideal for modeling high-temporal-resolution events with longer sequences. To this end, we first propose a Path-Adaptive Event Aggregation and Scan (PEAS) module that encodes events of varying duration into fixed-dimensional features by adaptively scanning and selecting aggregated event frames. On top of PEAS, we introduce a novel Multi-faceted Selection Guiding (MSG) loss that minimizes the randomness and redundancy of the encoded features, which subtly enhances model generalization across different inference frequencies. Lastly, the SSM is employed to better learn the spatiotemporal properties of the encoded features. Moreover, we build a minute-level event-based recognition dataset with arbitrary durations, named ArDVS100, for the benefit of the community. Extensive experiments show that our method outperforms prior art by +3.45%, +0.38%, and +8.31% on the DVS Action, SeAct, and HARDVS datasets, respectively.
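The pipeline described above (aggregate an arbitrary-duration event stream into frames, select a fixed number of them, then run a linear-time state-space scan) can be illustrated with a minimal NumPy sketch. This is not the paper's PEAS/MSG implementation: the frame selection below is a toy event-mass heuristic standing in for the learned adaptive selection, and `events_to_frames`, `select_frames`, and `ssm_scan` are hypothetical names introduced here for illustration.

```python
import numpy as np

def events_to_frames(events, sensor_hw, num_bins):
    """Aggregate an event stream of arbitrary duration into a fixed number
    of frames by splitting its time axis into num_bins equal windows.
    events: (N, 4) array of (x, y, t, polarity), t in ascending order."""
    h, w = sensor_hw
    t = events[:, 2]
    edges = np.linspace(t.min(), t.max() + 1e-9, num_bins + 1)
    frames = np.zeros((num_bins, h, w), dtype=np.float32)
    bins = np.clip(np.searchsorted(edges, t, side="right") - 1, 0, num_bins - 1)
    for b, x, y, p in zip(bins, events[:, 0].astype(int),
                          events[:, 1].astype(int), events[:, 3]):
        frames[b, y, x] += p  # signed accumulation of polarities
    return frames

def select_frames(frames, k):
    """Toy stand-in for adaptive frame selection: keep the k frames with
    the largest total event mass, preserving their temporal order."""
    mass = np.abs(frames).sum(axis=(1, 2))
    keep = np.sort(np.argsort(mass)[-k:])
    return frames[keep]

def ssm_scan(x, A, B, C):
    """Discrete linear state-space recurrence:
        h_t = A h_{t-1} + B x_t,   y_t = C h_t
    Runs in O(T) time, i.e., linear in sequence length T."""
    d_state = A.shape[0]
    h = np.zeros(d_state)
    ys = []
    for x_t in x:           # x: (T, d_in) sequence of frame features
        h = A @ h + B @ x_t
        ys.append(C @ h)
    return np.stack(ys)     # (T, d_out)
```

The point of the sketch is the shape contract: whatever the stream's duration, `select_frames` yields a fixed-length sequence, and the scan's cost grows only linearly with that length, which is what makes SSMs attractive for long, high-temporal-resolution event data.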