Parameter transfer is a central paradigm in transfer learning, enabling knowledge reuse across tasks and domains by sharing model parameters between upstream and downstream models. However, when only a subset of the upstream model's parameters is transferred to the downstream model, there is still no theoretical account of when such partial parameter reuse is beneficial or of the factors that govern its effectiveness. To address this gap, we analyze a setting in which both the upstream and downstream models are ReLU convolutional neural networks (CNNs). Within this framework, we characterize how the inherited parameters act as carriers of universal knowledge and identify the key factors that amplify their beneficial impact on the target task. Our analysis also explains why, in certain cases, transferring parameters can yield lower test accuracy on the target task than training a model from scratch. To the best of our knowledge, ours is the first theory to provide a dynamical analysis of parameter transfer and the first to prove the existence of negative transfer. We validate our theoretical findings empirically through both numerical simulations and experiments on real-world data.