We study variable selection (also called support recovery) in high-dimensional sparse linear regression when one has external information on which variables are likely to be associated with the response. Consistent recovery is only possible under somewhat restrictive conditions on sample size, dimension, signal strength, and sparsity. We investigate how these conditions can be relaxed by incorporating this external information. A key application that we consider is structural transfer learning, where variables selected in one or more source datasets are used to guide variable selection in a target dataset. We introduce a family of likelihood penalties that depend on the external information, motivated by connections to Bayesian variable selection. We show that these methods achieve variable selection consistency in regimes where any method ignoring external information fails, and that they achieve consistency at faster rates. We first quantify the potential gains under ideal, oracle-chosen penalties. We then propose computationally efficient empirical Bayes procedures that learn suitable penalties from the data. We prove that these procedures have improved variable selection properties compared to methods that do not use external information. We illustrate our approach using simulations and a genomics application, where results from mouse experiments are used to inform variable selection for gene expression data in humans.