We propose a fast and simple explainable AI (XAI) method for point cloud data. It computes pointwise importance with respect to a trained network's downstream task, allowing a better understanding of the network's properties, which is imperative for safety-critical applications. Beyond debugging and visualization, the low computational complexity of our method facilitates online feedback to the network at inference time; this can be used to reduce uncertainty and to increase robustness. In this work, we introduce \emph{Feature Based Interpretability} (FBI), in which we compute the norm of the features, per point, before the bottleneck. We analyze the use of gradients and of post- and pre-bottleneck strategies, showing that the pre-bottleneck strategy is preferred in terms of smoothness and ranking. Our method is at least three orders of magnitude faster than current XAI methods, making it scalable to large point clouds and large-scale architectures. Our approach achieves SOTA results in terms of classification explainability. We demonstrate how the proposed measure helps in analyzing and characterizing various aspects of 3D learning, such as rotation invariance, robustness to out-of-distribution (OOD) outliers, domain shift, and dataset bias.
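To make the core idea concrete, below is a minimal sketch of the pre-bottleneck, per-point feature-norm measure. It assumes a PointNet-style encoder (a shared per-point MLP followed by a max-pooling bottleneck); the class and function names (\texttt{ToyPointEncoder}, \texttt{fbi\_importance}) and the layer sizes are illustrative, not the paper's exact architecture.

\begin{verbatim}
import torch
import torch.nn as nn

class ToyPointEncoder(nn.Module):
    """Illustrative PointNet-style encoder (not the paper's exact model)."""
    def __init__(self, in_dim=3, feat_dim=1024):
        super().__init__()
        # Shared per-point MLP, applied independently to every point.
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, feat_dim),
        )

    def forward(self, pts):                    # pts: (B, N, 3)
        feats = self.mlp(pts)                  # (B, N, feat_dim), pre-bottleneck
        global_feat = feats.max(dim=1).values  # max-pool bottleneck: (B, feat_dim)
        return feats, global_feat

def fbi_importance(pre_bottleneck_feats):
    # Per-point importance: the L2 norm of each point's feature vector,
    # taken *before* the pooling bottleneck. A single forward pass and
    # no gradients are needed, which is the source of the speedup.
    return pre_bottleneck_feats.norm(dim=-1)   # (B, N)

if __name__ == "__main__":
    pts = torch.randn(2, 2048, 3)              # two clouds of 2048 points
    encoder = ToyPointEncoder()
    with torch.no_grad():
        feats, _ = encoder(pts)
        scores = fbi_importance(feats)         # one importance score per point
    print(scores.shape)                        # torch.Size([2, 2048])
\end{verbatim}

Note that the measure is a by-product of the ordinary forward pass: unlike gradient-based XAI methods, no backward pass or repeated perturbed evaluations are required, which is consistent with the reported orders-of-magnitude speedup.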