The neural additive model (NAM) is a recently proposed explainable artificial intelligence (XAI) method built on neural network architectures. Leveraging the strengths of neural networks, NAMs provide intuitive explanations for their predictions while maintaining high predictive performance. In this paper, we analyze a critical yet overlooked phenomenon: NAMs often produce inconsistent explanations, even when trained with the same architecture on the same dataset. Such inconsistencies have traditionally been viewed as issues to be resolved. We argue instead that they convey valuable information about the given data and model. Through a simple theoretical framework, we show that these inconsistencies are not mere artifacts but arise naturally in datasets with multiple important features. To leverage this information effectively, we introduce a novel framework, the Bayesian Neural Additive Model (BayesNAM), which integrates Bayesian neural networks with feature dropout, and we provide a theoretical proof that feature dropout effectively captures model inconsistencies. Our experiments demonstrate that BayesNAM effectively reveals potential problems, such as insufficient data or structural limitations of the model, providing more reliable explanations and potential remedies.
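As a rough illustration of the additive structure and the feature-dropout mechanism mentioned above, the following PyTorch sketch builds one small subnetwork per input feature and randomly drops whole feature contributions during training. This is a minimal sketch under assumed conventions, not the paper's implementation: the class names (FeatureNet, NAMWithFeatureDropout) and the dropout rate are illustrative, and the Bayesian treatment of the network weights is omitted for brevity.

```python
# Minimal sketch of a neural additive model with feature dropout (assumed PyTorch setup).
import torch
import torch.nn as nn


class FeatureNet(nn.Module):
    """Small subnetwork mapping a single scalar feature to its shape function f_i(x_i)."""

    def __init__(self, hidden_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, x_i: torch.Tensor) -> torch.Tensor:
        return self.net(x_i)


class NAMWithFeatureDropout(nn.Module):
    """Additive model f(x) = bias + sum_i f_i(x_i); during training, entire
    feature contributions are dropped with probability feature_dropout_p."""

    def __init__(self, num_features: int, feature_dropout_p: float = 0.2):
        super().__init__()
        self.feature_nets = nn.ModuleList(FeatureNet() for _ in range(num_features))
        self.bias = nn.Parameter(torch.zeros(1))
        self.feature_dropout_p = feature_dropout_p

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_features); one additive contribution per feature
        contribs = torch.cat(
            [net(x[:, i : i + 1]) for i, net in enumerate(self.feature_nets)], dim=1
        )
        if self.training:
            # Drop whole feature contributions (not individual hidden units),
            # rescaling the survivors as in standard dropout.
            mask = (torch.rand_like(contribs) > self.feature_dropout_p).float()
            contribs = contribs * mask / (1.0 - self.feature_dropout_p)
        return self.bias + contribs.sum(dim=1, keepdim=True)


# Usage: the per-feature contributions f_i(x_i) serve as the model's explanations.
model = NAMWithFeatureDropout(num_features=5)
y_hat = model(torch.randn(8, 5))
```

In this sketch, repeating the forward pass with dropout active yields varying per-feature contributions, which is one simple way to surface the kind of explanation inconsistency the abstract describes.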