We present DeepIPCv2, an autonomous driving model that perceives the environment with a LiDAR sensor for more robust drivability, especially under poor illumination where objects are not clearly visible. DeepIPCv2 takes a set of LiDAR point clouds as its main perception input. Since point clouds are unaffected by illumination changes, they provide a clear observation of the surroundings regardless of conditions. This yields better scene understanding and more stable features from the perception module, which in turn support the controller module in properly estimating navigational controls. To evaluate its performance, we conduct several tests: deploying the model to predict a set of driving records and performing real automated driving under three different conditions. We also conduct ablation and comparative studies against several recent models to justify its performance. The experimental results show that DeepIPCv2 performs robustly, achieving the best drivability in all driving scenarios. Furthermore, to support future research, we will release the code and data at https://github.com/oskarnatan/DeepIPCv2.