Large-scale datasets have fueled recent advancements in AI-based autonomous vehicle research. However, these datasets are usually collected from a single vehicle's one-time pass of a certain location, lacking multiagent interactions or repeated traversals of the same place. Such information could lead to transformative enhancements in autonomous vehicles' perception, prediction, and planning capabilities. To bridge this gap, in collaboration with the self-driving company May Mobility, we present the MARS dataset, which unifies scenarios that enable MultiAgent, multitraveRSal, and multimodal autonomous vehicle research. More specifically, MARS is collected with a fleet of autonomous vehicles driving within a certain geographical area. Each vehicle has its own route, and different vehicles may appear at nearby locations. Each vehicle is equipped with a LiDAR and surround-view RGB cameras. We curate two subsets in MARS: one facilitates collaborative driving with multiple vehicles simultaneously present at the same location, and the other enables memory retrospection through asynchronous traversals of the same location by multiple vehicles. We conduct experiments in place recognition and neural reconstruction. More importantly, MARS introduces new research opportunities and challenges such as multitraversal 3D reconstruction, multiagent perception, and unsupervised object discovery. Our data and code can be found at https://ai4ce.github.io/MARS/.