We propose a fast and simple explainable AI (XAI) method for point cloud data. It computes pointwise importance with respect to a trained network's downstream task. This allows a better understanding of the network's properties, which is imperative for safety-critical applications. Beyond debugging and visualization, the method's low computational complexity facilitates online feedback to the network at inference time, which can be used to reduce uncertainty and increase robustness. In this work, we introduce \emph{Feature Based Interpretability} (FBI), in which we compute the norm of the features, per point, before the bottleneck. We analyze the use of gradients and of post- and pre-bottleneck strategies, showing that the pre-bottleneck strategy is preferable in terms of smoothness and ranking. We obtain a speedup of at least three orders of magnitude compared to current XAI methods, making the approach scalable to large point clouds and large-scale architectures. Our approach achieves state-of-the-art (SOTA) results in terms of classification explainability. We demonstrate how the proposed measure is helpful in analyzing and characterizing various aspects of 3D learning, such as rotation invariance, robustness to out-of-distribution (OOD) outliers or domain shift, and dataset bias.
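For concreteness, the following is a minimal sketch of the FBI score computation, assuming a PointNet-style encoder in PyTorch; the module structure and the names \texttt{PointNetEncoder} and \texttt{fbi\_scores} are illustrative assumptions, not the authors' released code.

\begin{verbatim}
import torch
import torch.nn as nn

class PointNetEncoder(nn.Module):
    # Minimal PointNet-style encoder (an assumed stand-in): a shared
    # per-point MLP followed by a max-pooling bottleneck.
    def __init__(self, in_dim=3, feat_dim=1024):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv1d(in_dim, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, feat_dim, 1),
        )

    def forward(self, x):                          # x: (B, 3, N)
        per_point = self.mlp(x)                    # (B, feat_dim, N), pre-bottleneck
        global_feat = per_point.max(dim=2).values  # (B, feat_dim), bottleneck
        return global_feat, per_point

def fbi_scores(per_point_features):
    # FBI: pointwise importance = L2 norm of each point's
    # pre-bottleneck feature vector; no backward pass required.
    return per_point_features.norm(dim=1)          # (B, N)

encoder = PointNetEncoder().eval()
points = torch.randn(2, 3, 1024)                   # two clouds, 1024 points each
with torch.no_grad():
    _, feats = encoder(points)
scores = fbi_scores(feats)                         # (2, 1024) per-point importance
\end{verbatim}

Because the scores fall out of a single forward pass, with no gradient computation, the overhead over plain inference is negligible, which is consistent with the reported orders-of-magnitude speedup over gradient-based XAI methods.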