Significance testing aims to determine whether a proposition about the population distribution holds given the observed data. However, traditional significance testing typically requires deriving the distribution of the test statistic, and therefore struggles with complex nonlinear relationships. In this paper, we propose Full Bayesian Significance Testing for neural networks, called \textit{n}FBST, to overcome the limitations of traditional approaches in characterizing such relationships. A Bayesian neural network is used to fit nonlinear, multi-dimensional relationships with small error, and computing the evidence value avoids difficult theoretical derivations. Moreover, \textit{n}FBST can test not only global significance but also local and instance-wise significance, which previous testing methods do not address. Furthermore, \textit{n}FBST is a general framework that can be extended according to the chosen measure, e.g., Grad-\textit{n}FBST, LRP-\textit{n}FBST, DeepLIFT-\textit{n}FBST, and LIME-\textit{n}FBST. A range of experiments on both simulated and real data demonstrate the advantages of our method.