Dynamic Vision Sensors (DVS) offer a unique advantage in control applications due to their high temporal resolution and asynchronous, event-based data. Still, their adoption in machine learning algorithms remains limited. To address this gap and promote the development of models that leverage the specific characteristics of DVS data, we introduce MMDVS-LF: a Multi-Modal Dynamic Vision Sensor and Eye-Tracking Dataset for Line Following. This comprehensive dataset is the first to integrate multiple sensor modalities, including DVS recordings and eye-tracking data, from a small-scale standardized vehicle. Additionally, the dataset includes RGB video, odometry, Inertial Measurement Unit (IMU) data, and demographic data of drivers performing a line-following task. With its diverse range of data, MMDVS-LF opens new opportunities for developing event-based deep learning algorithms, just as the MNIST dataset did for Convolutional Neural Networks.