This paper introduces the first publicly accessible labeled multi-modal perception dataset for autonomous maritime navigation, focusing on in-water obstacles to enhance situational awareness for Autonomous Surface Vehicles (ASVs). The dataset, collected over four years, comprises diverse objects encountered under varying environmental conditions and aims to bridge a research gap in ASVs by providing a multi-modal, annotated, ego-centric perception dataset for object detection and classification. We demonstrate the applicability of the dataset by training and testing current deep learning-based open-source perception algorithms that have proven successful in the autonomous ground vehicle domain. Based on the training and testing results, we discuss open challenges for existing datasets and methods and identify future research directions. We expect our dataset to contribute to the development of future marine autonomy pipelines and marine (field) robotics. The dataset is open source and available at https://seepersea.github.io/.