'A trustworthy representation of uncertainty is desirable and should be considered as a key feature of any machine learning method' (Huellermeier and Waegeman, 2021). This conclusion of Huellermeier and Waegeman underpins the importance of calibrated uncertainties. Since AI-based algorithms are heavily affected by dataset shifts, the automotive industry needs to safeguard its systems against all foreseeable contingencies. One important but often neglected dataset shift is caused by optical aberrations induced by the windshield. To verify the performance of the perception system, requirements on the AI performance need to be translated into optical metrics by a bijective mapping (Braun, 2023). Given this bijective mapping, it is evident that the optical system characteristics add information about the magnitude of the dataset shift. As a consequence, we propose to incorporate a physical inductive bias into the neural network calibration architecture to enhance the robustness and the trustworthiness of the AI target application, which we demonstrate using a semantic segmentation task as an example. By utilizing the Zernike coefficient vector of the optical system as a physical prior, we significantly reduce the mean expected calibration error in the presence of optical aberrations. As a result, we pave the way for a trustworthy uncertainty representation and for a holistic verification strategy of the perception chain.
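The two central quantities of the abstract can be made concrete with a minimal sketch: a binned expected calibration error (ECE), and a hypothetical physical prior that maps a Zernike coefficient vector to a calibration temperature. The function `zernike_temperature`, its weights `w` and `b`, and the softplus form are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: weighted mean |accuracy - confidence| over confidence bins."""
    ece = 0.0
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            # Bin weight times the accuracy/confidence gap inside the bin.
            ece += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
    return ece

def zernike_temperature(zernike, w, b):
    """Hypothetical prior: map a Zernike coefficient vector (aberration
    strength) to a softmax temperature T > 1 used to soften the logits."""
    return 1.0 + np.log1p(np.exp(w @ zernike + b))  # softplus keeps T above 1
```

A model that is 50% accurate while reporting 85% confidence, for instance, incurs an ECE of 0.35; dividing the logits by the Zernike-conditioned temperature before the softmax would shrink that gap as aberrations grow.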