The uncertainty quantification of prediction models (e.g., neural networks) is crucial for their adoption in many robotics applications. This is arguably as important as making accurate predictions, especially for safety-critical applications such as self-driving cars. This paper proposes an approach to uncertainty quantification in the context of visual localization for autonomous driving, where locations are predicted from images. Our proposed framework estimates probabilistic uncertainty by creating a sensor error model that maps an internal output of the prediction model to the uncertainty. The sensor error model is built from multiple image databases for visual localization, each with ground-truth locations. We demonstrate the accuracy of our uncertainty prediction framework on the Ithaca365 dataset, which includes variations in lighting and weather (sunny, snowy, night) as well as alignment errors between databases. We analyze both the predicted uncertainty and its incorporation into a Kalman-based localization filter. Our results show that prediction errors grow under poor weather and lighting conditions, producing greater uncertainty and more outliers, both of which can be predicted by our proposed uncertainty model. Additionally, our probabilistic error model allows the filter to dispense with ad hoc sensor gating, as the uncertainty automatically adapts the model to the input data.
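To illustrate how a per-measurement uncertainty estimate can replace ad hoc sensor gating in a Kalman-based filter, the following is a minimal sketch (not the paper's implementation): a standard Kalman measurement update where the measurement-noise covariance `R` is supplied per observation, as a learned sensor error model would. The 2-D position state, the identity observation model, and all numeric values are illustrative assumptions.

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """Standard Kalman measurement update.

    R is the per-measurement noise covariance; in the paper's setting it
    would come from the learned sensor error model rather than be fixed.
    """
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# Hypothetical 2-D position state with an identity observation model.
x = np.zeros(2)
P = np.eye(2) * 10.0
H = np.eye(2)
z = np.array([1.0, -0.5])

# Low predicted uncertainty (good conditions): measurement pulls the state strongly.
x_good, _ = kalman_update(x, P, z, H, np.eye(2) * 0.1)
# High predicted uncertainty (e.g., night or snow): same measurement is downweighted.
x_bad, _ = kalman_update(x, P, z, H, np.eye(2) * 100.0)
```

Because a high-uncertainty measurement is smoothly downweighted by the gain rather than accepted or rejected by a hand-tuned threshold, no separate gating step is needed.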