The widespread use of maximum Jeffreys'-prior penalized likelihood in binomial-response generalized linear models, and in logistic regression in particular, is supported by the results of Kosmidis and Firth (2021, Biometrika), who show that the resulting estimates are always finite-valued, even in cases where the maximum likelihood estimates are not, a practical issue that arises regardless of the size of the data set. In logistic regression, the implied adjusted score equations are formally bias-reducing in asymptotic frameworks with a fixed number of parameters, and appear to deliver a substantial reduction in the persistent bias of the maximum likelihood estimator in high-dimensional settings where the number of parameters grows asymptotically as a proportion of the number of observations. In this work, we develop and present two new variants of iteratively reweighted least squares (IWLS) for estimating generalized linear models with adjusted score equations for mean bias reduction and for maximization of the likelihood penalized by a positive power of the Jeffreys'-prior penalty. The new variants eliminate the requirement to store $O(n)$ quantities in memory and can operate on data sets that exceed computer memory or even hard-drive capacity. We achieve this through incremental QR decompositions, which allow each IWLS iteration to access only data chunks of predetermined size. Both procedures can also be readily adapted to fit generalized linear models when distinct parts of the data are stored across different sites and, because of privacy concerns, cannot be fully transferred across sites. We assess the procedures through a real-data application with millions of observations.
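The bounded-memory idea described above can be illustrated with a minimal sketch: one weighted least-squares solve (the core step of an IWLS iteration) carried out over data chunks via incremental QR updates, so that only $O(p^2)$ state is held between chunks. This is not the authors' implementation; the function name `chunked_wls` and the chunk layout are illustrative assumptions.

```python
import numpy as np

def chunked_wls(chunk_iter, p):
    """Solve a weighted least-squares problem seeing one data chunk at a time.

    chunk_iter yields (X, z, w): design rows, working responses, and
    working weights, as they would arise inside an IWLS iteration.
    Only the p-by-p triangular factor R and the p-vector Q'z are kept
    between chunks, so memory use does not grow with n.
    """
    R = np.zeros((0, p))   # running triangular factor (empty at start)
    qtz = np.zeros(0)      # running projection Q'z
    for X, z, w in chunk_iter:
        sw = np.sqrt(w)
        # Stack the current factor on top of the new (weighted) rows
        # and re-factorize; this updates R and Q'z incrementally.
        A = np.vstack([R, X * sw[:, None]])
        b = np.concatenate([qtz, z * sw])
        Q, R = np.linalg.qr(A)   # reduced QR: R is p-by-p once rows >= p
        qtz = Q.T @ b
    # Back-substitution gives the weighted least-squares coefficients.
    return np.linalg.solve(R, qtz[:p])
```

A full IWLS loop would recompute the working responses `z` and weights `w` from the current coefficients on each pass over the chunks; the sketch above shows only the single linear solve that each iteration requires, which matches the usual least-squares solution while never loading more than one chunk of rows at a time.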