Learning systems often expand their ambient features or latent representations over time, embedding earlier representations into larger spaces that add only limited new latent structure. We study transfer learning for structured matrix estimation under simultaneous growth of the ambient dimension and the intrinsic representation, where a well-estimated source task is embedded as a subspace of a higher-dimensional target task. We propose a general transfer framework in which the target parameter decomposes into an embedded source component, low-rank innovations, and sparse edits, and we develop an anchored alternating-projection estimator that preserves the transferred subspaces while estimating only the low-rank innovations and sparse edits. We establish deterministic error bounds that separate target noise, representation growth, and source estimation error, yielding strictly improved rates when the rank and sparsity increments are small. We demonstrate the generality of the framework on two canonical problems. For Markov transition matrix estimation from a single trajectory, we derive end-to-end theoretical guarantees under dependent noise. For structured covariance estimation under enlarged dimensions, we provide complementary theoretical analysis in the appendix and empirically validate consistent transfer gains.