Since DNNs are vulnerable to carefully crafted adversarial examples, adversarial attacks on LiDAR sensors have been studied extensively. We introduce a robust black-box attack dubbed LiDAttack. It uses a genetic algorithm with a simulated annealing strategy to strictly limit the location and number of perturbation points, achieving a stealthy and effective attack. It also simulates scanning deviations, allowing it to adapt to dynamic changes in real-world scenarios. Extensive experiments are conducted on 3 datasets (i.e., KITTI, nuScenes, and self-constructed data) with 3 dominant object detection models (i.e., PointRCNN, PointPillar, and PV-RCNN++). The results demonstrate the effectiveness of LiDAttack across a wide range of object detection models, with an attack success rate (ASR) of up to 90%.