Latent space models are widely used for analyzing high-dimensional discrete data matrices, such as patient-feature matrices in electronic health records (EHRs), by capturing complex dependence structures through low-dimensional embeddings. However, estimation becomes challenging in the imbalanced regime, where one matrix dimension is much larger than the other. In EHR applications, cohort sizes are often limited by disease prevalence or data availability, whereas the feature space remains extremely large due to the breadth of medical coding systems. Motivated by the increasing availability of external semantic embeddings, such as pre-trained embeddings of clinical concepts in EHRs, we propose a knowledge-embedded latent projection model that leverages semantic side information to regularize representation learning. Specifically, we model column embeddings as smooth functions of semantic embeddings via a mapping in a reproducing kernel Hilbert space. We develop a computationally efficient two-step estimation procedure that combines semantically guided subspace construction via kernel principal component analysis with scalable projected gradient descent. We establish estimation error bounds that characterize the trade-off between statistical error and approximation error induced by the kernel projection. Furthermore, we provide local convergence guarantees for our non-convex optimization procedure. Extensive simulation studies and a real-world EHR application demonstrate the effectiveness of the proposed method.
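The two-step procedure described above can be sketched in code. The following is a minimal illustrative implementation, not the authors' actual method: all dimensions, the RBF kernel choice, the Bernoulli (logit-link) observation model, and the step size are assumptions made for the sketch. Step 1 builds a semantically guided subspace by running kernel PCA on the feature-level semantic embeddings; Step 2 runs gradient descent with the column embeddings constrained to (i.e., projected onto) the span of that subspace.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Simulated data (all sizes are illustrative assumptions) ---
# n patients, q features (q >> n is the imbalanced regime), d_sem semantic
# dimension, r kernel-PCA rank, k latent rank.
n, q, d_sem, r, k = 100, 400, 30, 10, 5
S = rng.normal(size=(q, d_sem))          # pre-trained semantic embeddings of the q features
U_true = rng.normal(size=(n, k))
V_true = rng.normal(size=(q, k))
P_true = 1.0 / (1.0 + np.exp(-(U_true @ V_true.T)))
X = (rng.random((n, q)) < P_true).astype(float)   # binary patient-feature matrix

# --- Step 1: semantically guided subspace via kernel PCA on S ---
sq = np.sum(S**2, axis=1)
K = np.exp(-(sq[:, None] + sq[None, :] - 2 * S @ S.T) / (2 * d_sem))  # RBF kernel (assumed)
J = np.eye(q) - np.ones((q, q)) / q
Kc = J @ K @ J                            # double-centred kernel matrix
evals, evecs = np.linalg.eigh(Kc)
Phi = evecs[:, -r:]                       # top-r kernel principal directions (q x r)

# --- Step 2: gradient descent with column embeddings constrained to span(Phi) ---
U = 0.1 * rng.normal(size=(n, k))
W = 0.1 * rng.normal(size=(r, k))         # column embeddings are V = Phi @ W

def nll(U, W):
    """Mean Bernoulli negative log-likelihood under a logit link."""
    logits = U @ (Phi @ W).T
    return np.mean(np.logaddexp(0.0, logits) - X * logits)

losses = [nll(U, W)]
lr = 5.0                                  # illustrative step size
for _ in range(200):
    logits = U @ (Phi @ W).T
    R = 1.0 / (1.0 + np.exp(-logits)) - X  # gradient of the NLL w.r.t. the logits
    U -= lr * (R @ (Phi @ W)) / (n * q)
    W -= lr * (Phi.T @ R.T @ U) / (n * q)
    losses.append(nll(U, W))
```

Because the column embeddings are parameterized as `Phi @ W`, every gradient step automatically stays in the kernel-PCA subspace, which is how the semantic side information regularizes the high-dimensional column side; the trade-off the abstract mentions is between the statistical error of fitting `U` and `W` and the approximation error from truncating to the top-r kernel directions.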