Event cameras operate fundamentally differently from traditional Active Pixel Sensor (APS) cameras, offering significant advantages. Recent research has developed simulators that convert video frames into events, addressing the shortage of real event datasets. Current simulators primarily model the logical behavior of event cameras; the underlying analogue properties of the pixel circuits are seldom considered in simulator design. The gap between analogue pixel circuits and discrete video frames degrades the quality of synthetic events, particularly in high-contrast scenes. In this paper, we propose a novel method for generating reliable event data based on a detailed analysis of the pixel circuitry in event cameras. We incorporate two analogue properties of event camera pixel circuits into the simulator design: (1) analogue filtering of the signal path from light intensity to events, and (2) a cutoff frequency that is independent of the video frame rate. Experimental results on two relevant tasks, semantic segmentation and image reconstruction, validate the reliability of the simulated event data, even in high-contrast scenes. Deep neural networks trained on our synthetic events generalize well to real event data, confirming that the events generated by the proposed method are both realistic and well-suited for effective training.
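The two analogue properties above can be illustrated with a minimal sketch: a per-pixel first-order low-pass filter applied to log intensity, whose smoothing factor depends only on the inter-frame interval and a fixed circuit cutoff frequency (not on the frame rate itself), followed by threshold crossings that emit events. All names, parameter values, and the single-event-per-crossing simplification here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def simulate_events(frames, timestamps, fc=200.0, threshold=0.2):
    """Hypothetical event-generation sketch.

    frames     : (T, H, W) array of linear intensities
    timestamps : length-T sequence of frame times in seconds
    fc         : cutoff frequency in Hz -- a property of the assumed
                 pixel-circuit model, independent of the frame rate
    threshold  : log-intensity contrast threshold for firing an event
    Returns a list of (t, y, x, polarity) tuples.
    """
    log_I = np.log(frames.astype(np.float64) + 1e-6)
    state = log_I[0].copy()   # low-pass filter state per pixel
    ref = log_I[0].copy()     # reference level at the last event
    events = []
    for k in range(1, len(frames)):
        dt = timestamps[k] - timestamps[k - 1]
        # Discrete first-order low-pass step: alpha is set by dt and fc
        # only, so the analogue bandwidth does not track the frame rate.
        alpha = 1.0 - np.exp(-2.0 * np.pi * fc * dt)
        state += alpha * (log_I[k] - state)
        diff = state - ref
        # Fire one event per pixel whose filtered log intensity has
        # moved past the contrast threshold since its last event.
        for pol, mask in ((1, diff >= threshold), (-1, diff <= -threshold)):
            ys, xs = np.nonzero(mask)
            events.extend((timestamps[k], y, x, pol) for y, x in zip(ys, xs))
            ref[mask] = state[mask]
    return events
```

A lower `fc` makes the filter state lag behind fast intensity changes, which suppresses events during rapid transients; this is the analogue behavior that purely logical simulators miss.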