The linear mixed-effects model (LMM) is a cornerstone of longitudinal data analysis, but it struggles to accommodate population heterogeneity while jointly modeling group-specific fixed effects and subject-specific random effects. To address this challenge, we propose a novel statistical framework built on a large-model prototype: the mixed-effects mixture-of-experts (MEMoE) model. This framework integrates the divide-and-conquer paradigm of mixture-of-experts models with classical mixed-effects modeling. In the proposed MEMoE, each expert is a full LMM dedicated to capturing the longitudinal trajectory of a specific latent subpopulation, while a gating function learns, in a data-driven manner, to route subjects to the most appropriate expert based on baseline covariates. We develop a robust inferential procedure for parameter estimation based on a Laplace expectation-maximization (EM) algorithm, with standard errors calibrated by robust sandwich estimators to account for potential model misspecification. Extensive simulation studies and an empirical application demonstrate that MEMoE outperforms both the traditional single-population LMM and conventional mixture-of-experts models in terms of parameter recovery, classification accuracy, and overall model fit.
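To make the model structure concrete, the following is a minimal sketch of the MEMoE marginal log-likelihood for one subject: a softmax gate driven by baseline covariates weights K expert LMMs, each with its own fixed effects, random-effects covariance, and residual variance. All function names, dimensions, and parameter values here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

def gating_probs(x, Gamma):
    """Softmax gate: probability of routing a subject with baseline
    covariates x to each of the K experts (illustrative parameterization)."""
    logits = Gamma @ x               # (K,) one raw score per expert
    logits -= logits.max()           # stabilize the softmax
    p = np.exp(logits)
    return p / p.sum()

def memoe_marginal_loglik(y, X, Z, x_base, Gamma, betas, Ds, sigma2s):
    """Log marginal density of one subject's trajectory under MEMoE:
    a gate-weighted mixture of K LMM marginals, where expert k has
    mean X @ beta_k and marginal covariance Z D_k Z' + sigma2_k I."""
    pis = gating_probs(x_base, Gamma)
    n = len(y)
    comp = np.empty(len(pis))
    for k, (beta, D, s2) in enumerate(zip(betas, Ds, sigma2s)):
        mean = X @ beta
        cov = Z @ D @ Z.T + s2 * np.eye(n)
        comp[k] = multivariate_normal.logpdf(y, mean, cov)
    # log-sum-exp over experts for numerical stability
    m = comp.max()
    return m + np.log(np.sum(pis * np.exp(comp - m)))

# toy example: K=2 experts, n=5 repeated measures, random intercept only
n, q, K = 5, 1, 2
X = np.column_stack([np.ones(n), np.arange(n)])   # intercept + time
Z = np.ones((n, q))                               # random-intercept design
x_base = np.array([1.0, 0.5])                     # baseline covariates (with intercept)
Gamma = rng.normal(size=(K, 2))                   # gating coefficients (assumed values)
betas = [np.array([0.0, 1.0]), np.array([2.0, -0.5])]
Ds = [np.eye(q) * 0.5, np.eye(q) * 0.8]
sigma2s = [0.3, 0.4]
y = X @ betas[0] + rng.normal(scale=0.5, size=n)  # trajectory from expert 0

ll = memoe_marginal_loglik(y, X, Z, x_base, Gamma, betas, Ds, sigma2s)
print(f"marginal log-likelihood: {ll:.3f}")
```

In a Laplace-EM fit, the E-step would turn these gate-weighted component densities into posterior expert responsibilities, and the M-step would update each expert's LMM parameters and the gating coefficients.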