This letter presents a high-dimensional analysis of the training dynamics of a single-layer nonlinear contrastive learning model. The empirical distribution of the model weights converges to a deterministic measure governed by a McKean-Vlasov nonlinear partial differential equation (PDE). Under L2 regularization, this PDE reduces to a closed set of low-dimensional ordinary differential equations (ODEs) that capture the evolution of model performance during training. We analyze the locations and stability of the fixed points of these ODEs, unveiling several interesting findings. First, at the uninformative initial state, only the second moment of the hidden variable affects feature learnability. Second, higher moments influence the probability of feature selection by controlling the basin of attraction, rather than the local stability. Finally, independent noise added during data augmentation degrades performance, whereas negatively correlated noise reduces the variance of the gradient estimate and thereby improves performance. Despite the simplicity of the analyzed model, it exhibits rich training dynamics, paving the way toward understanding the more complex mechanisms behind practical large-scale models.
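For orientation, a minimal sketch of the generic form that such McKean-Vlasov limits take in mean-field analyses of gradient-based training; the effective potential \(\Psi\) and the diffusion coefficient \(\tau\) below are illustrative assumptions, not the exact equation derived in this letter:
\[
\partial_t \rho_t(w) \;=\; \nabla_w \cdot \big( \rho_t(w)\, \nabla_w \Psi(w;\rho_t) \big) \;+\; \tau\, \Delta_w \rho_t(w),
\]
where \(\rho_t\) denotes the limiting weight distribution and \(\Psi(w;\rho_t)\) an effective potential that itself depends on \(\rho_t\); this self-dependence is what makes the PDE nonlinear in the McKean-Vlasov sense. Under L2 regularization, the dynamics of a few summary statistics of \(\rho_t\) (such as low-order moments of \(w\)) can close on themselves, yielding a low-dimensional ODE description of the kind analyzed here.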