With the increasing demand for interpretability in machine learning, functional ANOVA decomposition has gained renewed attention as a principled tool for breaking a high-dimensional function down into low-dimensional components that reveal the contributions of different groups of variables. Recently, the Tensor Product Neural Network (TPNN) has been developed and used as a basis function in the functional ANOVA model, a combination referred to as ANOVA-TPNN. A disadvantage of ANOVA-TPNN, however, is that the components to be estimated must be specified in advance, which makes it difficult to incorporate higher-order TPNNs into the functional ANOVA model due to computational and memory constraints. In this work, we propose Bayesian-TPNN, a Bayesian inference procedure for the functional ANOVA model with TPNN basis functions that enables the detection of higher-order components at reduced computational cost compared to ANOVA-TPNN. We develop an efficient MCMC algorithm and demonstrate that Bayesian-TPNN performs well on multiple benchmark datasets. Theoretically, we prove that the posterior of Bayesian-TPNN is consistent.
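For context, the standard functional ANOVA decomposition referred to here expands a function of $p$ inputs into a sum of components of increasing order (main effects, pairwise interactions, and so on); the notation below is the conventional one and is not taken from this work:

```latex
% Functional ANOVA decomposition of f : [0,1]^p -> R
f(x_1, \dots, x_p)
  = f_0
  + \sum_{j=1}^{p} f_j(x_j)
  + \sum_{1 \le j < k \le p} f_{jk}(x_j, x_k)
  + \cdots
  + f_{1 \cdots p}(x_1, \dots, x_p),
```

where $f_0$ is a constant, each $f_j$ is a main effect, each $f_{jk}$ is a second-order interaction, and higher-order terms capture interactions among larger variable groups; the components are typically made identifiable by requiring each to integrate to zero over each of its arguments.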