It is well known that Bridge regression enjoys superior theoretical properties compared to the traditional LASSO. However, the current latent variable representation of its Bayesian counterpart, based on the exponential power prior, is computationally expensive in higher dimensions. In this paper, we show that the exponential power prior has a closed-form scale mixture of normals decomposition for $\alpha=(\frac{1}{2})^\gamma, \gamma \in \{1, 2,\ldots\}$. We call these types of priors $L_{\frac{1}{2}}$ priors for short. We develop an efficient partially collapsed Gibbs sampling scheme for computation using the $L_{\frac{1}{2}}$ prior and study its theoretical properties when $p>n$. In addition, we introduce a non-separable Bridge penalty function inspired by the fully Bayesian formulation, together with a novel, efficient coordinate descent algorithm. We prove the algorithm's convergence and show that the local minimizer from our optimisation algorithm has an oracle property. Finally, we carry out simulation studies to illustrate the performance of the new algorithms. Supplementary materials for this article are available online.
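The exponential power prior referenced above has, up to scale, density $f(x) \propto \exp(-|x|^\alpha)$, with normalizing constant $\frac{\alpha}{2\Gamma(1/\alpha)}$ at unit scale. As a minimal sanity check (a sketch, not the paper's sampler; the unit-scale parameterization is an assumption), the following verifies numerically that this normalized density integrates to one for $\alpha = (\frac{1}{2})^\gamma$ with $\gamma = 1, 2$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def ep_density(x, alpha):
    # Exponential power density at unit scale:
    # f(x) = alpha / (2 * Gamma(1/alpha)) * exp(-|x|^alpha)
    return alpha / (2.0 * gamma(1.0 / alpha)) * np.exp(-np.abs(x) ** alpha)

# Check the density integrates to 1 for alpha = (1/2)^gamma, gamma = 1, 2
for g in (1, 2):
    alpha = 0.5 ** g
    half, _ = quad(ep_density, 0.0, np.inf, args=(alpha,))
    total = 2.0 * half  # the density is symmetric about 0
    print(f"alpha = {alpha}: integral = {total:.6f}")
```

For small $\alpha$ the tails are very heavy, which is precisely why direct computation becomes delicate and a scale-mixture-of-normals representation is attractive for Gibbs sampling.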