Estimation in exploratory factor analysis often yields estimates on the boundary of the parameter space. Such occurrences, known as Heywood cases, are characterised by non-positive variance estimates; they can cause issues in numerical optimisation procedures or convergence failures, which in turn can lead to misleading inferences, particularly regarding factor scores and model selection. We derive sufficient conditions on the model and on a penalty to the log-likelihood function that i) guarantee the existence of maximum penalised likelihood estimates in the interior of the parameter space, and ii) ensure that the corresponding estimators possess the desirable asymptotic properties expected of the maximum likelihood estimator, namely consistency and asymptotic normality. Consistency and asymptotic normality are achieved when the penalisation is soft enough, in a way that adapts to the accumulation of information about the model parameters. We formally show, for the first time, that the penalties of Akaike (1987) and Hirose et al. (2011) on the log-likelihood of the normal linear factor model satisfy the conditions for existence and hence resolve Heywood cases. Their vanilla versions, though, can have questionable finite-sample properties in estimation, inference, and model selection. The maximum softly penalised likelihood framework we introduce enables the careful scaling of those penalties to ensure that the resulting estimation and inference procedures are asymptotically optimal. Through comprehensive simulation studies and the analysis of real data sets, we illustrate the desirable finite-sample properties of the maximum softly penalised likelihood estimators and associated procedures.