Fusing different sensor modalities can be a difficult task, particularly when they are asynchronous. Asynchrony may arise from long processing times or improper synchronisation during calibration, yet this stale information must still be usable for safe driving tasks such as object detection for ego-vehicle and multi-agent trajectory prediction. The difficulty is that the sensor modalities capture information at different times and at different positions in space, so they are neither spatially nor temporally aligned. This paper investigates the challenge of radar and LiDAR sensors being asynchronous relative to the camera sensors, across a range of time latencies. Spatial alignment is resolved by transforming the radar/LiDAR point clouds into the new ego-frame coordinate system before lifting into bird's-eye-view (BEV) space; only then can the radar/LiDAR point clouds be concatenated with the lifted camera features. Temporal alignment is remedied for radar data only: we implement a novel method of inferring future radar point positions using the velocity information. Our approach to resolving sensor asynchrony yields promising results. We demonstrate that velocity information can drastically improve IoU on asynchronous datasets: for a time latency of 360 milliseconds (ms), IoU improves from 49.54 to 53.63. Additionally, at a time latency of 550 ms, the camera+radar (C+R) model outperforms the camera+LiDAR (C+L) model by 0.18 IoU. This is an advancement in utilising the often-neglected radar sensor modality, which is less favoured than LiDAR for autonomous driving purposes.
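The spatial-alignment step described above amounts to a rigid transform of each point cloud into the current ego frame. A minimal sketch, assuming a rotation matrix and translation vector obtained from calibration/ego-pose data (the function name and array shapes here are illustrative, not the paper's actual implementation):

```python
import numpy as np

def transform_to_ego_frame(points, rotation, translation):
    """Bring a radar/LiDAR point cloud into the current ego-frame coordinates.

    points:      (N, 3) array of x/y/z positions in the sensor's original frame
    rotation:    (3, 3) rotation matrix from the old frame to the new ego frame
    translation: (3,) translation of the old frame's origin in the new ego frame
    """
    # Rotate each point, then shift by the frame offset (row-vector convention).
    return points @ rotation.T + translation
```

Only after this transform do the point-cloud features and the lifted camera features share a common BEV coordinate system, so concatenation becomes meaningful.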
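The temporal-alignment idea for radar can be sketched as a constant-velocity extrapolation: each radar point is propagated forward by the sensor latency using its measured velocity. This is a simplified illustration under a constant-velocity assumption; the function name and data layout are hypothetical, not the paper's implementation:

```python
import numpy as np

def compensate_radar_latency(points, velocities, latency_s):
    """Extrapolate radar point positions forward by the sensor latency.

    points:     (N, 2) array of x/y positions in the ego frame (metres)
    velocities: (N, 2) array of per-point velocities (m/s)
    latency_s:  time offset between the radar sweep and the camera frame (s)
    """
    # Constant-velocity model: predicted position = position + velocity * dt.
    return points + velocities * latency_s
```

For example, a point at x = 10 m moving at 5 m/s, compensated for a 360 ms latency, is predicted at x = 11.8 m, which is then fused with the camera features captured at the later timestamp.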