In mortality modelling, cohort effects are often incorporated because they capture variations in mortality across generations. Statistically, models such as the Renshaw-Haberman model may fit historical data better than counterparts without cohort effects. However, when such models are estimated with an iterative maximum likelihood method that updates one parameter at a time, convergence is typically slow and may not be reached within a reasonably set maximum number of iterations. In particular, slow convergence hinders the study of parameter uncertainty through bootstrapping. In this paper, we propose an intuitive estimation method that minimizes the sum of squared errors between the actual and fitted log central death rates. The complications arising from the incorporation of cohort effects are overcome by formulating part of the optimization as a principal component analysis with missing values. Using mortality data from several populations, we demonstrate that the proposed method produces satisfactory estimation results and is significantly more efficient than the traditional likelihood-based approach.
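To illustrate the key building block mentioned above, the following is a minimal sketch (not the authors' exact algorithm; the function name and all details are hypothetical) of principal component analysis with missing values, fitted by least squares: missing cells of a matrix of log central death rates are imputed from the current low-rank reconstruction, which is then refreshed by a truncated SVD, iterating until the sum of squared errors on the observed cells stops improving.

```python
import numpy as np

def pca_with_missing(M, rank=1, n_iter=500, tol=1e-10):
    """Rank-`rank` least-squares PCA of matrix M, with NaNs as missing.

    EM-style scheme: impute missing cells with the current low-rank fit,
    recompute the fit by truncated SVD, and repeat until the sum of
    squared errors (SSE) on observed cells stops improving.
    """
    mask = ~np.isnan(M)
    X = np.where(mask, M, np.nanmean(M))  # initial imputation: grand mean
    prev_sse = np.inf
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        fit = U[:, :rank] * s[:rank] @ Vt[:rank]    # low-rank reconstruction
        sse = np.sum((M[mask] - fit[mask]) ** 2)    # SSE on observed cells only
        X = np.where(mask, M, fit)                  # re-impute missing cells
        if prev_sse - sse < tol:
            break
        prev_sse = sse
    return fit, sse
```

In a cohort-effect model, the cells that are unobservable for certain (age, year) combinations play the role of the NaNs here, so the least-squares criterion is evaluated only over observed entries.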