Autonomous driving on water surfaces plays an essential role in executing hazardous and time-consuming missions, such as maritime surveillance, survivor rescue, environmental monitoring, hydrographic mapping and waste cleaning. This work presents WaterScenes, the first multi-task 4D radar-camera fusion dataset for autonomous driving on water surfaces. Equipped with a 4D radar and a monocular camera, our Unmanned Surface Vehicle (USV) offers all-weather solutions for discerning object-related information, including color, shape, texture, range, velocity, azimuth, and elevation. Focusing on typical static and dynamic objects on water surfaces, we label the camera images and radar point clouds at pixel-level and point-level, respectively. In addition to basic perception tasks, such as object detection, instance segmentation and semantic segmentation, we also provide annotations for free-space segmentation and waterline segmentation. Leveraging the multi-task and multi-modal data, we conduct benchmark experiments on the uni-modality of radar and camera, as well as the fused modalities. Experimental results demonstrate that 4D radar-camera fusion can considerably improve the accuracy and robustness of perception on water surfaces, especially in adverse lighting and weather conditions. The WaterScenes dataset is publicly available at https://waterscenes.github.io.
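To make the fusion setup concrete, the sketch below shows one common way such 4D radar returns (range, azimuth, elevation) can be associated with camera pixels: convert each return to a 3D point and project it through a pinhole camera model. This is a generic, minimal illustration, not the dataset's actual toolkit; it assumes the radar and camera frames are aligned (no extrinsic calibration) and uses hypothetical intrinsic parameters.

```python
import numpy as np

def spherical_to_camera(r, azimuth, elevation):
    """Convert radar returns (range, azimuth, elevation in radians) to 3D
    points in a camera-style frame (x right, y down, z forward).
    Assumption: radar and camera frames coincide (no extrinsic offset)."""
    x = r * np.cos(elevation) * np.sin(azimuth)
    y = -r * np.sin(elevation)               # positive elevation -> up -> negative y
    z = r * np.cos(elevation) * np.cos(azimuth)
    return np.stack([x, y, z], axis=-1)

def project_to_image(points, K):
    """Pinhole projection of Nx3 camera-frame points with intrinsics K (3x3),
    returning Nx2 pixel coordinates (u, v)."""
    uvw = points @ K.T
    return uvw[..., :2] / uvw[..., 2:3]

# Hypothetical intrinsics: focal length 800 px, principal point (640, 360).
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])

# A return 10 m straight ahead lands on the principal point.
pts = spherical_to_camera(np.array([10.0]), np.array([0.0]), np.array([0.0]))
uv = project_to_image(pts, K)
```

In practice, an extrinsic rotation and translation between the radar and camera would be applied before projection, and the per-point velocity measured by the 4D radar can then be attached to the corresponding image region as an extra fusion feature.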