Transfer learning (TL) has emerged as a powerful tool for improving estimation and prediction performance by leveraging information from related datasets. In this paper, we repurpose the control-variates (CVS) method for TL in the context of scalar-on-function regression. Our proposed framework relies exclusively on dataset-specific summary statistics, avoiding the need to pool subject-level data and thus remaining applicable in privacy-restricted or decentralized settings. We establish theoretical connections among several existing TL strategies and derive convergence rates for our CVS-based proposals. These rates explicitly account for the typically overlooked smoothing error and reveal how the similarity among covariance functions across datasets influences convergence behavior. Numerical studies support the theoretical findings and demonstrate that the proposed methods achieve competitive estimation and prediction performance compared with existing alternatives.