Model uncertainty is a central challenge in statistical models for binary outcomes such as logistic regression, arising when it is unclear which predictors should be included in the model. Many methods have been proposed to address this issue for logistic regression, but their relative performance under realistic conditions remains poorly understood. We therefore conducted a preregistered, simulation-based comparison of 28 established methods for variable selection and inference under model uncertainty, using 11 empirical datasets spanning a range of sample sizes and numbers of predictors, both with and without separation. We found that Bayesian model averaging (BMA) methods based on g-priors, particularly g = max(n, p^2), show the strongest overall performance when separation is absent. When separation occurs, penalized likelihood approaches, especially the LASSO, provide the most stable results, while BMA with the local empirical Bayes (EB-local) prior is competitive in both situations. These findings offer practical guidance for applied researchers on how to effectively address model uncertainty in logistic regression in modern empirical and machine learning research.
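To make the separation phenomenon referenced in the abstract concrete, the following is a minimal illustrative sketch (not from the study itself, and the specific data and penalty settings are assumptions): under complete separation the unpenalized maximum-likelihood slope in logistic regression diverges, while an L1 (LASSO) penalty keeps it finite, which is the mechanism behind the LASSO's stability noted above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data with complete separation: x > 0 perfectly predicts y = 1,
# so the unpenalized MLE for the slope diverges toward infinity.
rng = np.random.default_rng(0)
x = np.concatenate([rng.uniform(-2.0, -0.5, 50), rng.uniform(0.5, 2.0, 50)])
X = x.reshape(-1, 1)
y = (x > 0).astype(int)

# L1-penalized (LASSO) logistic regression: the penalty keeps the
# coefficient finite despite the separation.
lasso = LogisticRegression(penalty="l1", C=1.0, solver="liblinear").fit(X, y)

# Nearly unpenalized fit (very weak L2 penalty via a huge C): the
# coefficient grows very large, reflecting the divergent MLE.
near_mle = LogisticRegression(C=1e8, solver="lbfgs", max_iter=10_000).fit(X, y)

print("LASSO slope:   ", lasso.coef_[0][0])     # finite, moderate
print("near-MLE slope:", near_mle.coef_[0][0])  # much larger
```

The contrast between the two fitted slopes is the practical symptom of separation that the compared methods must cope with: penalization bounds the estimate, whereas plain maximum likelihood does not.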