Autonomous driving has the potential to significantly enhance productivity and provide numerous societal benefits. Ensuring robustness in these safety-critical systems is essential, particularly when vehicles must navigate adverse weather conditions and sensor corruptions that may not have been encountered during training. Current methods often overlook uncertainties arising from adversarial conditions or distributional shifts, limiting their real-world applicability. We propose an efficient adaptation of an uncertainty estimation technique for 3D occupancy prediction. Our method dynamically calibrates model confidence using epistemic uncertainty estimates. Our evaluation under various camera corruption scenarios, such as fog or missing cameras, demonstrates that our approach effectively quantifies epistemic uncertainty by assigning higher uncertainty values to unseen data. We introduce region-specific corruptions to simulate defects affecting only a single camera and validate our findings through both scene-level and region-level assessments. Our results show superior performance in Out-of-Distribution (OoD) detection and confidence calibration compared to common baselines such as Deep Ensembles and MC-Dropout. Our approach consistently demonstrates reliable uncertainty measures, indicating its potential for enhancing the robustness of autonomous driving systems in real-world scenarios. Code and dataset are available at https://github.com/ika-rwth-aachen/OCCUQ.
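The OoD-detection evaluation described in the abstract, i.e. checking that corrupted inputs (e.g. fog) receive higher epistemic uncertainty than clean inputs, is typically scored with AUROC over per-sample uncertainty values. The sketch below is illustrative only and not the paper's implementation: the `auroc` helper and the example uncertainty values are assumptions, computing AUROC as the probability that a randomly drawn corrupted sample scores higher than a randomly drawn clean one.

```python
import numpy as np

def auroc(u_clean: np.ndarray, u_corrupt: np.ndarray) -> float:
    """AUROC for OoD detection from uncertainty scores.

    Equivalent to the Mann-Whitney U statistic: the fraction of
    (clean, corrupt) pairs where the corrupted sample gets the higher
    uncertainty, counting ties as half a win. 1.0 means perfect
    separation; 0.5 means uncertainty is uninformative.
    """
    u_clean = np.asarray(u_clean, dtype=float)[:, None]    # shape (n, 1)
    u_corrupt = np.asarray(u_corrupt, dtype=float)[None, :]  # shape (1, m)
    wins = (u_corrupt > u_clean).mean()
    ties = (u_corrupt == u_clean).mean()
    return float(wins + 0.5 * ties)

# Hypothetical per-scene epistemic uncertainties (not from the paper):
u_clean = np.array([0.10, 0.15, 0.20, 0.30])   # in-distribution scenes
u_fog = np.array([0.70, 0.80, 0.90, 0.95])     # fog-corrupted scenes
print(auroc(u_clean, u_fog))  # → 1.0 (corrupted always more uncertain)
```

A well-calibrated epistemic uncertainty estimator pushes this score toward 1.0 under corruption while keeping clean-scene uncertainties low; the same metric applies per region for the single-camera defect setting.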