Intelligibility and accurate uncertainty estimation are crucial for reliable decision-making. In this paper, we propose EviNAM, an extension of evidential learning that integrates the interpretability of Neural Additive Models (NAMs) with principled uncertainty estimation. Unlike standard Bayesian neural networks and previous evidential methods, EviNAM provides, in a single forward pass, estimates of both aleatoric and epistemic uncertainty together with explicit feature contributions. Experiments on synthetic and real data demonstrate that EviNAM matches state-of-the-art predictive performance. While we focus on regression, our method extends naturally to classification and generalized additive models, offering a path toward more intelligible and trustworthy predictions.
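The core idea can be sketched compactly. Below is a minimal, untrained numpy illustration (not the paper's implementation): each feature gets its own small network, the per-feature outputs are summed additively, and the sum parameterizes a Normal-Inverse-Gamma distribution in the style of deep evidential regression, from which aleatoric and epistemic uncertainty follow in closed form in a single forward pass. All class and variable names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def softplus(x):
    # numerically safe positivity transform
    return np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0.0)

class FeatureNet:
    """Tiny per-feature MLP mapping a scalar input to 4 raw evidential outputs."""
    def __init__(self, hidden=16):
        self.w1 = rng.normal(0.0, 1.0, (1, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 0.1, (hidden, 4))
        self.b2 = np.zeros(4)

    def __call__(self, x):                      # x: (n, 1)
        h = np.maximum(x @ self.w1 + self.b1, 0.0)   # ReLU hidden layer
        return h @ self.w2 + self.b2                 # (n, 4)

class EviNAMSketch:
    """Additive model: per-feature nets whose summed outputs are mapped to
    Normal-Inverse-Gamma parameters (gamma, nu, alpha, beta)."""
    def __init__(self, n_features):
        self.nets = [FeatureNet() for _ in range(n_features)]

    def forward(self, X):                       # X: (n, d)
        # per-feature contributions -> interpretability, as in a NAM
        contribs = [net(X[:, [j]]) for j, net in enumerate(self.nets)]
        raw = sum(contribs)                     # additive structure
        gamma = raw[:, 0]                       # predictive mean
        nu    = softplus(raw[:, 1])             # > 0
        alpha = softplus(raw[:, 2]) + 1.0       # > 1
        beta  = softplus(raw[:, 3])             # > 0
        aleatoric = beta / (alpha - 1.0)        # E[sigma^2]: data noise
        epistemic = beta / (nu * (alpha - 1.0)) # Var[mu]: model uncertainty
        return gamma, aleatoric, epistemic, contribs

model = EviNAMSketch(n_features=3)
X = rng.normal(size=(5, 3))
mean, alea, epi, contribs = model.forward(X)
```

Because the uncertainty decomposition is a closed-form function of the NIG parameters, no sampling or ensembling is needed, and the per-feature contribution arrays can be plotted to inspect each feature's effect.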