Although single-robot perception has made significant advances over the past decade, multi-robot collaborative perception remains largely unexplored. It requires fusing compressed, intermittent, limited, heterogeneous, and asynchronous environmental information across multiple robots to enhance overall perception, despite challenges such as sensor noise, occlusions, and sensor failures. One major hurdle has been the lack of real-world datasets. This paper presents a pioneering and comprehensive real-world multi-robot collaborative perception dataset to boost research in this area. Our dataset leverages the untapped potential of air-ground robot collaboration, featuring distinct spatial viewpoints, complementary robot mobilities, coverage ranges, and sensor modalities. It provides raw sensor inputs, pose estimates, and optional high-level perception annotations, thus accommodating diverse research interests. Compared with existing datasets designed predominantly for Simultaneous Localization and Mapping (SLAM), our setup ensures a diverse range and adequate overlap of sensor views to facilitate the study of multi-robot collaborative perception algorithms. We demonstrate the value of this dataset qualitatively on multiple collaborative perception tasks. We believe this work will unlock the potential of high-level scene understanding through multi-modal collaborative perception in multi-robot settings.