Pretraining and fine-tuning are central stages in modern machine learning systems. In practice, feature learning plays an important role across both stages: deep neural networks learn a broad range of useful features during pretraining and further refine those features during fine-tuning. However, an end-to-end theoretical understanding of how choices of initialization impact the ability to reuse and refine features during fine-tuning has remained elusive. Here we develop an analytical theory of the pretraining-fine-tuning pipeline in diagonal linear networks, deriving exact expressions for the generalization error as a function of initialization parameters and task statistics. We find that different initialization choices place the network into four distinct fine-tuning regimes that are distinguished by their ability to support feature learning and reuse, and therefore by the task statistics for which they are beneficial. In particular, a smaller initialization scale in earlier layers enables the network to both reuse and refine its features, leading to superior generalization on fine-tuning tasks that rely on a subset of pretraining features. We demonstrate empirically that the same initialization parameters impact generalization in nonlinear networks trained on CIFAR-100. Overall, our results demonstrate analytically how data and network initialization interact to shape fine-tuning generalization, highlighting an important role for the relative scale of initialization across different layers in enabling continued feature learning during fine-tuning.
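Below is a minimal illustrative sketch, not the paper's exact setup, of the pretraining-fine-tuning pipeline in a two-layer diagonal linear network, where the effective weights are the elementwise product w = u * v and each layer receives its own initialization scale. The dimensions, learning rate, task vectors, and the scale pairs (alpha_u, alpha_v) below are assumptions chosen for illustration; the paper's exact expressions for generalization error are not reproduced here.

```python
# Sketch: pretrain then fine-tune a two-layer diagonal linear network
# f(x) = (u * v)^T x, varying the per-layer initialization scales.
# All hyperparameters and tasks here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
d, n = 50, 25                       # input dimension, samples per task (overparameterized)
lr, steps = 0.05, 2000

# Pretraining task uses many features; fine-tuning task reuses a subset of them.
w_pre = np.zeros(d); w_pre[:20] = 1.0
w_ft  = np.zeros(d); w_ft[:5]  = 1.0

def make_data(w_true):
    X = rng.standard_normal((n, d))
    return X, X @ w_true

def train(u, v, X, y):
    """Gradient descent on the squared loss of f(x) = (u*v)^T x."""
    for _ in range(steps):
        r = X @ (u * v) - y                  # residuals
        grad_w = X.T @ r / n                 # gradient w.r.t. effective weights w = u*v
        u, v = u - lr * grad_w * v, v - lr * grad_w * u
    return u, v

def run(alpha_u, alpha_v):
    u = alpha_u * np.ones(d)                 # earlier-layer initialization scale
    v = alpha_v * np.ones(d)                 # later-layer initialization scale
    u, v = train(u, v, *make_data(w_pre))    # pretraining
    u, v = train(u, v, *make_data(w_ft))     # fine-tuning
    return np.mean((u * v - w_ft) ** 2)      # population generalization error

for a_u, a_v in [(1e-3, 1.0), (1.0, 1e-3), (1.0, 1.0)]:
    print(f"alpha_u={a_u:g}, alpha_v={a_v:g}: err={run(a_u, a_v):.4f}")
```

Comparing the printed errors across the three scale pairs gives a rough sense of how the relative initialization scale of the earlier versus later layer can change fine-tuning generalization on a task that relies on a subset of pretraining features; the regime boundaries themselves are characterized analytically in the paper.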