We explore the estimation of generalized additive models using basis expansions in conjunction with Bayesian model selection. Although Bayesian model selection is useful for regression splines, it has traditionally been applied mainly to Gaussian regression owing to the availability of a tractable marginal likelihood. We extend this method to the exponential family of distributions by using a Laplace approximation of the likelihood. Although this approach works with any Gaussian prior distribution, no consensus has been reached on the best prior for nonparametric regression with basis expansions. Our investigation indicates that the classical unit information prior may not be ideal for nonparametric regression; instead, we find that mixtures of g-priors are more effective. We evaluate various mixtures of g-priors to assess their performance in estimating generalized additive models, and we compare several priors for the knots to determine the most effective strategy. Our simulation studies demonstrate that model-selection-based approaches outperform other Bayesian methods.
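To make the Laplace-approximated marginal likelihood concrete, the sketch below illustrates the general recipe for one exponential-family case (logistic regression) under a g-prior N(0, g(X'X)^{-1}). This is a hedged illustration only: the function name, the fixed choice of g, and the simulated data are our own assumptions, not the settings used in the paper, which considers mixtures of g-priors and basis-expanded design matrices.

```python
import numpy as np
from scipy.optimize import minimize

def log_marginal_laplace(X, y, g):
    """Laplace approximation to the log marginal likelihood of
    logistic regression with a g-prior beta ~ N(0, g * inv(X'X)).
    Illustrative sketch, not the paper's exact formulation."""
    n, p = X.shape
    Sigma_inv = (X.T @ X) / g  # prior precision under the g-prior

    def neg_log_post(beta):
        eta = X @ beta
        # Bernoulli log-likelihood, written stably via logaddexp
        loglik = y @ eta - np.sum(np.logaddexp(0.0, eta))
        logprior_kernel = -0.5 * beta @ Sigma_inv @ beta  # up to a constant
        return -(loglik + logprior_kernel)

    # Posterior mode (MAP estimate)
    res = minimize(neg_log_post, np.zeros(p), method="BFGS")
    beta_hat = res.x

    # Negative Hessian of the log posterior at the mode:
    # X'WX (likelihood curvature) + prior precision
    mu = 1.0 / (1.0 + np.exp(-(X @ beta_hat)))
    H = X.T @ (X * (mu * (1.0 - mu))[:, None]) + Sigma_inv

    # log m(y) ~= log p(y|b) + log p(b) + (p/2)log(2*pi) - (1/2)log|H|;
    # the (2*pi)^{p/2} factors from the prior and the Laplace integral cancel.
    _, logdet_prior = np.linalg.slogdet(Sigma_inv)
    _, logdet_H = np.linalg.slogdet(H)
    return -res.fun + 0.5 * logdet_prior - 0.5 * logdet_H

# Simulated example (assumed data, g fixed at n for illustration)
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
beta_true = np.array([1.0, -0.5, 0.0])
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-(X @ beta_true)))).astype(float)
print(log_marginal_laplace(X, y, g=200.0))
```

In a model-selection setting, this quantity would be computed for each candidate set of knots (i.e., each candidate design matrix X), and a mixture of g-priors would replace the fixed g by integrating over a hyperprior on g.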