Object detectors are widely used in safety-critical real-time applications such as autonomous driving. Explainability is especially important for safety-critical applications, and because object detectors are diverse and often proprietary, black-box explainability tools are needed. However, existing black-box explainability tools for AI models rely on multiple model calls, rendering them impractical for real-time use. In this paper, we introduce IncX, an algorithm and a tool for real-time black-box explainability for object detectors. The algorithm is based on linear transformations of saliency maps and produces sufficient explanations. We evaluate our implementation on four widely used video datasets of autonomous driving and demonstrate that IncX's explanations are comparable in quality to those of the state of the art while being computed two orders of magnitude faster, making them usable in real time.